California Department of Transportation The California Department of Transportation (Caltrans) is an executive department of the U.S. state of California. The department is part of the cabinet-level California State Transportation Agency (CalSTA). Caltrans is headquartered in Sacramento. Caltrans manages the state's highway system, which includes the California Freeway and Expressway System, and is involved with public transportation systems throughout the state. It supports Amtrak California and Amtrak's Capitol Corridor. In 2015, Caltrans released a new mission statement: "Provide a safe, sustainable, integrated and efficient transportation system to enhance California’s economy and livability." The earliest predecessor of Caltrans was the Bureau of Highways, which was created by the California Legislature and signed into law by Governor James Budd in 1895. This agency consisted of three commissioners who were charged with analyzing the state road system and making recommendations. At the time, there was no state highway system, since roads were purely a local responsibility. California's roads consisted of crude dirt roads maintained by county governments, as well as some paved roads within city boundaries, and this ad hoc system was no longer adequate for the needs of the state's rapidly growing population. After the commissioners submitted their report to the governor on November 25, 1896, the legislature replaced the Bureau with the Department of Highways. Due to the state's weak fiscal condition and corrupt politics, little progress was made until 1907, when the legislature replaced the Department of Highways with the Department of Engineering, within which there was a Division of Highways. California voters approved an $18 million bond issue for the construction of a state highway system in 1910, and the first California Highway Commission was convened in 1911. 
On August 7, 1912, the department broke ground on its first construction project, the section of El Camino Real between South San Francisco and Burlingame, which later became part of California State Route 82. The year 1912 also saw the founding of the Transportation Laboratory and the creation of seven administrative divisions, the predecessors of the 12 district offices in use today. In 1913, the California State Legislature began requiring vehicle registration and allocated the resulting funds to support regular highway maintenance. In 1921, the state legislature turned the Department of Engineering into the Department of Public Works. The history of Caltrans and its predecessor agencies during the 20th century was marked by many firsts. It was one of the first agencies in the United States to paint centerlines on highways statewide; the first to build a freeway west of the Mississippi River; the first to build a four-level stack interchange; the first to develop and deploy non-reflective raised pavement markers, better known as Botts' dots; and one of the first to implement dedicated freeway-to-freeway connector ramps for high-occupancy vehicle lanes. In late 1972, the legislature approved a reorganization, suggested by a study initiated by then-Governor Ronald Reagan, in which the Department of Public Works was merged with the Department of Aeronautics to become the modern California Department of Transportation. For administrative purposes, Caltrans divides the State of California into 12 districts, each supervised by a district office. Most districts cover multiple counties; District 12 (Orange County) is the only district covering a single county. The largest districts by population are District 4 (San Francisco Bay Area) and District 7 (Los Angeles and Ventura counties). Like most state agencies, Caltrans maintains its headquarters in Sacramento, which is covered by District 3.
https://en.wikipedia.org/wiki?curid=7710
Continuation War The Continuation War was a conflict fought by Finland and Nazi Germany, as co-belligerents, against the Soviet Union (USSR) from 1941 to 1944, during World War II. In Russian historiography, the war is called the Soviet–Finnish Front of the Great Patriotic War. Germany regarded its operations in the region as part of its overall war effort on the Eastern Front and provided Finland with critical materiel support and military assistance. The Continuation War began 15 months after the end of the Winter War, also fought between Finland and the USSR. Numerous reasons have been proposed for the Finnish decision to invade, the most commonly cited being the desire to regain territory lost during the Winter War. Other justifications for the conflict included President Ryti's vision of a Greater Finland and Commander-in-Chief Mannerheim's desire to retake Karelia. Plans for the attack were developed jointly by the "Wehrmacht" and a faction of Finnish political and military leaders, with the rest of the government remaining ignorant. Despite the co-operation in this conflict, Finland never formally signed the Tripartite Pact that had established the Axis powers, and it justified its alliance with Germany as self-defence. In June 1941, with the start of the German invasion of the Soviet Union, the Finnish Defence Forces launched their offensive following Soviet airstrikes. By September 1941, Finland had reversed its post–Winter War concessions to the Soviet Union by retaking the Karelian Isthmus and Ladoga Karelia. However, the Finnish Army continued the offensive past the pre-1939 border with the conquest of East Karelia, including Petrozavodsk, halting only a short distance from the centre of Leningrad, where it participated in besieging the city by cutting its northern supply routes and digging in until 1944.
In Lapland, joint German–Finnish forces failed to capture Murmansk or cut the Kirov (Murmansk) Railway, a transit route for lend-lease equipment to the USSR. The conflict stabilised with only minor skirmishes until the tide of the war turned against the Germans and the Soviet Union launched its strategic Vyborg–Petrozavodsk Offensive in June 1944. The attack drove the Finns from most of the territories they had gained during the war, but the Finnish Army halted the offensive in August 1944. Hostilities between Finland and the USSR ended with a ceasefire called on 5 September 1944, formalised by the signing of the Moscow Armistice on 19 September 1944. One of the conditions of this agreement was the expulsion, or disarming, of any German troops in Finnish territory, which led to the Lapland War between the former co-belligerents. World War II was concluded formally for Finland and the minor Axis powers with the signing of the Paris Peace Treaties in 1947. The treaties resulted in the restoration of borders per the 1940 Moscow Peace Treaty, the ceding of the municipality of Petsamo and the leasing of the Porkkala Peninsula to the USSR. Furthermore, Finland was required to pay $300 million in war reparations to the USSR. 63,200 Finns and 23,200 Germans died or went missing during the war, in addition to 158,000 and 60,400 wounded, respectively. Estimates of dead or missing Soviets range from 250,000 to 305,000, while 575,000 are estimated to have been wounded or fallen sick. On 23 August 1939, the Soviet Union (USSR) and Germany signed the Molotov–Ribbentrop Pact, in which the two parties agreed to divide the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania into spheres of interest, with Finland falling within the Soviet sphere. One week later, Germany invaded Poland, leading the United Kingdom and France to declare war on Germany. The Soviet Union invaded eastern Poland on 17 September.
Moscow turned its attention to the Baltic states, demanding that they allow Soviet military bases to be established and troops stationed on their soil. The Baltic governments acquiesced to these demands and signed agreements in September and October. In October 1939, the Soviet Union attempted to negotiate with Finland to cede Finnish territory on the Karelian Isthmus and the islands of the Gulf of Finland, and to establish a Soviet military base near the Finnish capital of Helsinki. The Finnish government refused, and the Red Army invaded Finland on 30 November 1939. The USSR was expelled from the League of Nations and was condemned by the international community for the illegal attack. Foreign support for Finland was promised, but very little actual help materialised, except from Sweden. The Moscow Peace Treaty concluded the 105-day Winter War on 13 March 1940 and started the Interim Peace. By the terms of the treaty, Finland ceded 11 percent of its national territory and 13 percent of its economic capacity to the Soviet Union. Some 420,000 evacuees were resettled from the ceded territories. Finland avoided total conquest by the Soviet Union and retained its sovereignty. Prior to the war, Finnish foreign policy had been based on multilateral guarantees of support from the League of Nations and the Nordic countries, but this policy was considered a failure. After the war, Finnish public opinion favoured the reconquest of Finnish Karelia. The government declared national defence to be its first priority, and military expenditure rose to nearly half of public spending. Finland purchased and received donations of war materiel during and immediately after the Winter War. Likewise, the Finnish leadership wanted to preserve the spirit of unanimity that had been felt throughout the country during the Winter War. The divisive White Guard tradition of celebrating 16 May, the victory day of the Finnish Civil War, was therefore discontinued.
The Soviet Union had received the Hanko Naval Base, on Finland's southern coast near the capital Helsinki, where it deployed over 30,000 Soviet military personnel. Relations between Finland and the Soviet Union remained strained after the signing of the one-sided peace treaty, and there were disputes regarding its implementation. Finland sought security against further territorial depredations by the USSR and proposed mutual defence agreements with Norway and Sweden, but these initiatives were quashed by Moscow. After the Winter War, Germany was viewed with distrust by the Finns, as it was considered an ally of the Soviet Union. Nonetheless, the Finnish government sought to restore diplomatic relations with Germany, but also continued its Western-orientated policy and negotiated a war trade agreement with the United Kingdom. The agreement was renounced after the German invasion of Denmark and Norway on 9 April 1940 resulted in the UK cutting all trade and traffic communications with the Nordic countries. With the fall of France, a Western orientation was no longer considered a viable option in Finnish foreign policy. On 15 and 16 June, the Soviet Union occupied the Baltic states without resistance, and Soviet puppet regimes were installed. Within two months, Estonia, Latvia and Lithuania were incorporated into the USSR as Soviet republics, and by mid-1940 the two remaining northern democracies, Finland and Sweden, were encircled by the hostile states of Germany and the Soviet Union. On 23 June, shortly after the Soviet occupation of the Baltic states began, Soviet Foreign Minister Vyacheslav Molotov contacted the Finnish government, demanding that a mining licence for the nickel mines in the municipality of Petsamo be issued to the USSR or, alternatively, that the establishment of a joint Soviet–Finnish company to operate there be permitted.
A licence to mine the deposit had already been granted to a British–Canadian company, and the demand was rejected by Finland. The following month, the Soviets demanded that Finland destroy the fortifications on the Åland Islands and grant the USSR the right to use Finnish railways to transport Soviet troops to the newly acquired Soviet base at Hanko. The Finns very reluctantly agreed to these demands. On 24 July, Molotov accused the Finnish government of persecuting the Finland–Soviet Union Peace and Friendship Society, a pro-communist group, and soon afterwards publicly declared support for it. The society organised demonstrations in Finland, some of which turned into riots. Russian sources, such as the book "Stalin's Missed Chance", maintain that Soviet policies leading up to the Continuation War are best explained as defensive measures taken by offensive means. The Soviet division of occupied Poland with Germany, the Soviet occupations of Lithuania, Latvia and Estonia, and the Soviet invasion of Finland in the Winter War are described as elements in the Soviet construction of a security zone, or buffer region, against the perceived threat from the capitalist powers of Western Europe. These Russian sources see the post-World War II establishment of Soviet satellite states in the Warsaw Pact countries and the Finno-Soviet Treaty of 1948 as the culmination of the Soviet defence plan. Western historians, such as Norman Davies and John Lukacs, dispute this view and describe pre-war Soviet policy as an attempt to stay out of the war and regain land lost after the fall of the Russian Empire. On 31 July 1940, German Chancellor Adolf Hitler gave the order to plan an assault on the Soviet Union, meaning that Germany had to reassess its position regarding both Finland and Romania.
Until then, Germany had rejected Finnish appeals to purchase arms, but with the prospect of an invasion of Russia, this policy was reversed, and in August the secret sale of weapons to Finland was permitted. Military authorities signed an agreement on 12 September, and an official exchange of diplomatic notes was sent on 22 September. At the same time, German troops were allowed to transit through Sweden and Finland. This change in policy meant Germany had effectively redrawn the border of the German and Soviet spheres of influence, violating the Molotov–Ribbentrop Pact. In response to this new situation, Molotov visited Berlin on 12–13 November 1940. He requested that Germany withdraw its troops from Finland and stop enabling Finnish anti-Soviet sentiments. He also reminded the Germans of the 1939 Soviet–German non-aggression pact. Hitler inquired how the USSR planned to settle the "Finnish question", to which Molotov responded that it would mirror the events in Bessarabia and the Baltic states. Hitler rejected this course of action. In December, the Soviet Union, Germany and the UK all voiced opinions concerning suitable Finnish presidential candidates. Risto Ryti was the sole candidate not objected to by any of the three powers and was elected on 19 December. In January 1941, Moscow demanded that Finland relinquish control of the Petsamo mining area to the Soviets, but Finland, emboldened by a rebuilt defence force and German support, rejected the proposition. On 18 December 1940, Hitler officially approved Operation Barbarossa, paving the way for the German invasion of the Soviet Union, in which he expected both Finland and Romania to participate. During this period, Finnish Major General Paavo Talvela met with German General Franz Halder and Hermann Göring in Berlin. This was the first time the Germans had advised the Finnish government, in carefully couched diplomatic terms, that they were preparing for war with the Soviet Union.
Outlines of the actual plan were revealed in January 1941, and regular contact between Finnish and German military leaders began in February. In the late spring of 1941, the USSR made a number of goodwill gestures to prevent Finland from falling completely under German influence. Ambassador Ivan Zotov was replaced with the more flexible Pavel Orlov. Furthermore, the Soviet government announced that it no longer opposed a rapprochement between Finland and Sweden. These conciliatory measures, however, did not have any effect on Finnish policy. Finland wished to re-enter the war mainly because of the Soviet invasion of Finland during the Winter War, which had taken place after Finnish hopes of relying on the League of Nations and Nordic neutrality to avoid conflicts had collapsed for lack of outside support. Finland primarily aimed to reverse its territorial losses from the March 1940 Moscow Peace Treaty and, depending on the success of the German invasion of the Soviet Union, possibly to expand its borders, especially into East Karelia. Some right-wing groups, such as the Academic Karelia Society, supported a Greater Finland ideology. The question of when and why Finland prepared for war remains somewhat opaque. Historian William R. Trotter stated that "it has so far proven impossible to pinpoint the exact date on which Finland was taken into confidence about Operation Barbarossa" and that "neither the Finns nor the Germans were entirely candid with one another as to their national aims and methods. In any case, the step from contingency planning to actual operations, when it came, was little more than a formality." According to a meta-analysis by Finnish historian Olli Vehviläinen, the inner circle of Finnish leadership, led by Ryti and Mannerheim, actively planned joint operations with Germany under a veil of ambiguous neutrality and without formal agreements, after an alliance with Sweden had proved fruitless.
He likewise refuted the so-called "driftwood theory" that Finland had been merely a piece of driftwood swept uncontrollably in the rapids of great-power politics. Even so, most historians conclude that Finland did not have any realistic alternative to cooperating with Germany at the time. On 20 May, the Germans invited a number of Finnish officers to discuss the coordination of Operation Barbarossa. The participants met on 25–28 May in Salzburg and Berlin and continued their meeting in Helsinki from 3 to 6 June. They agreed upon the arrival of German troops, Finnish mobilisation and a general division of operations. They also agreed that the Finnish Army would start mobilisation on 15 June, but the Germans did not reveal the actual date of the assault. The Finnish decisions were made by the inner circle of political and military leaders without the knowledge of the rest of the government, which was not informed until 9 June that mobilisation of reservists would be required due to tensions between Germany and the Soviet Union. Finland never signed the Tripartite Pact, which had been signed by all "de jure" Axis powers. The Finnish leadership, and Mannerheim in particular, clearly stated that Finland would fight against the Soviets only to the extent necessary to redress the balance of the 1940 treaty. For Hitler, the distinction was irrelevant, as he saw Finland as an ally. The Northern Front of the Leningrad Military District was commanded by Lieutenant General Markian Popov and numbered around 450,000 soldiers in 18 divisions and 40 independent battalions in the Finnish region. During the Interim Peace, the Soviet military had revised its operational plans to conquer Finland, but with Operation Barbarossa the USSR required its best units and latest materiel to be deployed against the Germans and thus abandoned plans for a renewed offensive against Finland.
The 23rd Army was deployed on the Karelian Isthmus, the 7th Army in Ladoga Karelia and the 14th Army in the Murmansk–Salla area of Lapland. The Northern Front also commanded eight aviation divisions. As the initial German strike against the Soviet Air Forces had not affected air units located near Finland, the front could deploy around 700 aircraft, supported by a number of Soviet Navy wings. The Red Banner Baltic Fleet comprised 2 battleships, 2 light cruisers, 47 destroyers or large torpedo boats, 75 submarines and over 200 smaller craft, as well as hundreds of aircraft, and outnumbered the "Kriegsmarine". The Finnish Army mobilised between 475,000 and 500,000 soldiers in 14 divisions and 3 brigades for the invasion, commanded by Field Marshal Mannerheim. Although initially deployed for a static defence, the Finnish Army was later to launch an attack to the south, on both sides of Lake Ladoga, putting pressure on Leningrad and thus supporting the advance of the German Army Group North. Finnish intelligence had overestimated the strength of the Red Army, which was in fact numerically inferior to the Finnish forces at various points along the border. The Finnish army, especially its artillery, was stronger than it had been during the Winter War, but it included only one armoured battalion and suffered from a general lack of motorised transport. The Finnish Air Force had 235 aircraft in July 1941 and, despite losses, 384 by September 1944. Even with this increase, the air force was constantly outnumbered by the Soviets. The Army of Norway, comprising four divisions totalling 67,000 German soldiers, held the arctic front, which stretched across Finnish Lapland. This army was also tasked with striking Murmansk and the Kirov (Murmansk) Railway during Operation Silver Fox.
The Army of Norway was under the direct command of the "Oberkommando des Heeres" (German Army High Command) and was organised into Mountain Corps Norway and XXXVI Mountain Corps, with the Finnish III Corps and the 14th Division attached to it. The "Oberkommando der Luftwaffe" (German Air Force High Command) assigned 60 aircraft from "Luftflotte 5" (Air Fleet 5) to provide air support to the Army of Norway and the Finnish Army, in addition to its main responsibility of defending Norwegian air space. In contrast to the front in Finland, a total of 149 divisions and 3,050,000 soldiers were deployed for the rest of Operation Barbarossa. On the evening of 21 June 1941, German minelayers hiding in the Archipelago Sea deployed two large minefields across the Gulf of Finland. Later that night, German bombers flew along the gulf to Leningrad, mining the harbour and the river Neva and making a refuelling stop at Utti, Finland, on the return leg. In the early hours of 22 June, Finnish forces launched Operation Kilpapurjehdus ("Regatta"), deploying troops in the demilitarised Åland Islands. Although the 1921 Åland Convention had clauses allowing Finland to defend the islands in the event of an attack, the coordination of this operation with the German invasion and the arrest of the Soviet consulate staff stationed on the islands meant that the deployment was a deliberate violation of the treaty, according to Finnish historian Mauno Jokipii. On the morning of 22 June, Adolf Hitler's proclamation read: "Together with their Finnish comrades in arms the heroes from Narvik stand at the edge of the Arctic Ocean. German troops under command of the conqueror of Norway, and the Finnish freedom fighters under their Marshal's command, are protecting Finnish territory." Following the launch of Operation Barbarossa at around 3:15 a.m. on 22 June 1941, the Soviet Union sent seven bombers on a retaliatory airstrike into Finland, hitting targets at 6:06 a.m. Helsinki time, as reported by the Finnish coastal defence ship "Väinämöinen".
On the morning of 25 June, the Soviet Union launched another air offensive, with 460 fighters and bombers targeting 19 airfields in Finland; however, inaccurate intelligence and poor bombing accuracy resulted in several raids hitting Finnish cities and municipalities, causing considerable damage. Twenty-three Soviet bombers were lost in this strike, while the Finnish forces lost no aircraft. Although the USSR claimed that the airstrikes were directed against German targets in Finland, particularly airfields, the Finnish government used the attacks as justification for the approval of a "defensive war". According to historian David Kirby, the message was intended more for public opinion in Finland than abroad, where the country was viewed as an ally of the Axis powers. The Finnish plans for the offensive in Ladoga Karelia were finalised on 28 June 1941, and the first stages of the operation began on 10 July. By 16 July, VI Corps had reached the northern shore of Lake Ladoga, dividing the Soviet 7th Army, which had been tasked with defending the area. The USSR struggled to contain the German assault, and soon the Soviet high command, "Stavka", pulled all available units stationed along the Finnish border into the beleaguered front line. Additional reinforcements were drawn from the 237th Rifle Division and the Soviet 10th Mechanised Corps, excluding the 198th Motorised Division, both of which were stationed in Ladoga Karelia, but this stripped much of the reserve strength of the Soviet units defending that area. The Finnish II Corps started its offensive in the north of the Karelian Isthmus on 31 July. Other Finnish forces reached the shores of Lake Ladoga on 9 August, encircling most of the three defending Soviet divisions on the northwestern coast of the lake in a pocket ("motti" in Finnish); these divisions were later evacuated across the lake. On 22 August, the Finnish IV Corps began its offensive south of II Corps and advanced towards Vyborg.
By 23 August, II Corps had reached the Vuoksi River to the east and encircled the Soviet forces defending Vyborg. The Soviet order to withdraw came too late, resulting in significant losses of materiel, although most of the troops were later evacuated via the Koivisto Islands. After suffering severe losses, the Soviet 23rd Army was unable to halt the offensive, and by 2 September the Finnish Army had reached the old 1939 border. The advance by Finnish and German forces split the Soviet Northern Front into the Leningrad Front and the Karelian Front. On 31 August, Finnish Headquarters ordered II and IV Corps, which had advanced the furthest, to halt their advance along a line that ran from the Gulf of Finland via Beloostrov, the Sestra River, the Okhta River and Lembolovo to Lake Ladoga. The line ran past the former 1939 border and lay a short distance from Leningrad. There, the corps were ordered to take up a defensive position. On 1 September, IV Corps engaged and defeated the Soviet 23rd Army near the town of Porlampi. Sporadic fighting continued around Beloostrov until the Soviets evicted the Finns on 20 September. The front on the Isthmus stabilised, and the Siege of Leningrad began. The Finnish Army of Karelia started its attack in East Karelia towards Petrozavodsk, Lake Onega and the Svir River on 9 September. German Army Group North advanced from the south of Leningrad towards the Svir River and captured Tikhvin, but was forced to retreat to the Volkhov River by Soviet counterattacks. Soviet forces repeatedly attempted to expel the Finns from their bridgehead south of the Svir during October and December but were repulsed; Soviet units also attacked the German 163rd Infantry Division, which was operating under Finnish command across the Svir, in October 1941 but failed to dislodge it. Despite these failed attacks, the Finnish attack in East Karelia had been blunted and the advance had halted by 6 December.
During the five-month campaign, the Finns suffered 75,000 casualties, of whom 26,355 died, while the Soviets suffered 230,000 casualties, of whom 50,000 became prisoners of war. The German objective in Finnish Lapland was to take Murmansk and to cut the Kirov (Murmansk) Railway running from Murmansk to Leningrad by capturing Salla and Kandalaksha. Murmansk was the only year-round ice-free port in the north and a threat to the nickel mine at Petsamo. The joint Finnish–German Operation Silver Fox was started on 29 June 1941 by the German Army of Norway, which had the Finnish 3rd and 6th Divisions under its command, against the defending Soviet 14th Army and 54th Rifle Division. By November, the operation had stalled short of the Kirov Railway due to unacclimatised German troops, heavy Soviet resistance, poor terrain, arctic weather and diplomatic pressure by the United States on the Finns regarding the lend-lease deliveries to Murmansk. The offensive and its three sub-operations failed to achieve their objectives. Both sides dug in, and the arctic theatre remained stable, excluding minor skirmishes, until the Soviet Petsamo–Kirkenes Offensive in October 1944. The crucial arctic lend-lease convoys from the US and the UK via Murmansk and the Kirov Railway to the bulk of the Soviet forces continued throughout World War II. The US supplied almost $11 billion in materials: 400,000 jeeps and trucks; 12,000 armoured vehicles (including 7,000 tanks, enough to equip some 20 US armoured divisions); 11,400 aircraft; and large shipments of food. By comparison, British shipments of Matilda, Valentine and Tetrarch tanks accounted for only 6 percent of total Soviet tank production, but over 25 percent of the medium and heavy tanks produced for the Red Army. The "Wehrmacht" rapidly advanced deep into Soviet territory early in the Operation Barbarossa campaign, leading the Finnish government to believe that Germany would defeat the Soviet Union quickly.
President Ryti envisioned a Greater Finland, in which Finland and other Finnic peoples would live inside a "natural defence borderline" incorporating the Kola Peninsula, East Karelia and perhaps even northern Ingria. In public, the proposed frontier was introduced with the slogan "short border, long peace". Some members of the Finnish Parliament, such as members of the Social Democratic Party and the Swedish People's Party, opposed the idea, arguing that maintaining the 1939 frontier would be enough. The Finnish Commander-in-Chief, Field Marshal C. G. E. Mannerheim, often called the war an anti-Communist crusade, hoping to defeat "Bolshevism once and for all". On 10 July, Mannerheim drafted his order of the day, the Sword Scabbard Declaration, in which he pledged to liberate Karelia; in private letters in December 1941, however, he made known his doubts about the need to push beyond the previous borders. The Finnish government assured the United States that it was unaware of the order. According to Vehviläinen, most Finns thought that the scope of the new offensive was only to regain what had been taken in the Winter War. He further stated that the term 'Continuation War' was coined at the start of the conflict by the Finnish government to justify the invasion to the population as a continuation of the defensive Winter War. The government also wished to emphasise that it was not an official ally of Germany, but a 'co-belligerent' fighting against a common enemy and with purely Finnish aims. Vehviläinen wrote that the authenticity of the government's claim changed when the Finnish Army crossed the old frontier of 1939 and began to annex Soviet territory. British author Jonathan Clements asserted that by December 1941, Finnish soldiers had started questioning whether they were fighting a war of national defence or foreign conquest. By the autumn of 1941, the Finnish military leadership had started to doubt Germany's capability to finish the war quickly.
The Finnish Defence Forces suffered relatively severe losses during their advance, and overall German victory became uncertain as German troops were halted near Moscow. German troops in northern Finland faced circumstances they were unprepared for and failed to reach their targets. As the front lines stabilised, Finland attempted to start peace negotiations with the USSR. Mannerheim refused to assault Leningrad, which would have inextricably tied Finland to Germany, as he regarded his objectives for the war as achieved; the decision angered the Germans. Due to the war effort, the Finnish economy suffered from a lack of labour, as well as food shortages and rising prices. To combat this, the Finnish government demobilised part of the army to prevent industrial and agricultural production from collapsing. In October, Finland informed Germany that it would need substantial deliveries of grain to manage until the next year's harvest. The German authorities would have rejected the request, but Hitler himself agreed. The annual grain deliveries equalled almost half of the Finnish domestic crop. In November, Finland joined the Anti-Comintern Pact. Finland maintained good relations with a number of other Western powers. Foreign volunteers from Sweden and Estonia were among the foreigners who joined Finnish ranks: Infantry Regiment 200, whose members were called "Finnish boys", mostly comprised Estonians, while the Swedes mustered the Swedish Volunteer Battalion. The Finnish government stressed that Finland was fighting as a co-belligerent with Germany against the USSR only to protect itself and that it remained the same democratic country it had been in the Winter War. For example, Finland maintained diplomatic relations with the exiled Norwegian government and more than once criticised German occupation policy in Norway. Relations between Finland and the United States were more complex, as the American public was sympathetic to the "brave little democracy" and held anti-communist sentiments.
At first, the United States sympathised with the Finnish cause, but the situation became problematic after the Finnish Army crossed the 1939 border. Finnish and German troops were a threat to the Kirov Railway and the northern supply line between the Western Allies and the Soviet Union. On 25 October 1941, the US demanded that Finland cease all hostilities against the USSR and withdraw behind the 1939 border. In public, President Ryti rejected the demands, but in private, he wrote to Mannerheim on 5 November asking him to halt the offensive. Mannerheim agreed and secretly instructed General Hjalmar Siilasvuo and his III Corps to end the assault on the Kirov Railway. On 12 July 1941, the United Kingdom signed an agreement of joint action with the Soviet Union. Under German pressure, Finland closed the British legation in Helsinki, cutting diplomatic relations with Britain on 1 August. The most sizeable British action on Finnish soil was the Raid on Kirkenes and Petsamo, an aircraft-carrier strike on German and Finnish ships on 31 July 1941. The attack accomplished little, except the loss of one Norwegian ship and three British aircraft, but it was intended to demonstrate British support for its Soviet ally. From September to October 1941, a total of 39 Hawker Hurricanes of No. 151 Wing RAF, based at Murmansk, reinforced and provided pilot-training to the Soviet Air Forces during Operation Benedict to protect Arctic convoys. On 28 November, the British government presented Finland with an ultimatum demanding that the Finns cease military operations by 3 December. Unofficially, Finland informed the Allies that Finnish troops would halt their advance in the next few days. The reply did not satisfy London, which declared war on Finland on 6 December. The Commonwealth nations of Canada, Australia, India and New Zealand soon followed suit.
In private, British Prime Minister Winston Churchill had sent a letter to Mannerheim on 29 November, in which he was "deeply grieved" that the UK would have to declare war on Finland because of the UK's alliance with the USSR. Mannerheim repatriated British volunteers under his command to the United Kingdom via Sweden. According to Clements, the war was mostly for appearances' sake. Unconventional warfare was fought in both the Finnish and Soviet wildernesses. Finnish long-range reconnaissance patrols, organised both by the Intelligence Division's Detached Battalion 4 and by local units, patrolled behind Soviet lines. Soviet partisans, both resistance fighters and regular long-range patrol detachments, conducted a number of operations in Finland and in Eastern Karelia from 1941 to 1944. In summer 1942, the USSR formed the 1st Partisan Brigade. The unit was 'partisan' in name only, as it was essentially 600 men and women on long-range patrol intended to disrupt Finnish operations. The 1st Partisan Brigade was able to infiltrate beyond Finnish patrol lines, but was intercepted, and rendered ineffective, in August 1942 at Lake Segozero. Irregular partisans distributed propaganda newspapers, such as Finnish translations of the official Communist Party paper "Pravda". The Soviet politician Yuri Andropov took part in these partisan guerrilla actions. Finnish sources state that, although Soviet partisan activity in East Karelia disrupted Finnish military supply and communication assets, almost two thirds of the attacks targeted civilians, killing 200 and injuring 50, including children and the elderly. Between 1942 and 1943, military operations were limited, although the front did see some action. In January 1942, the Soviet Karelian Front attempted to retake Medvezhyegorsk, which had been lost to the Finns in late 1941.
With the arrival of spring in April, Soviet forces went on the offensive on the Svir River front, in the Kestenga region further north in Lapland as well as in the far north at Petsamo with the 14th Rifle Division's amphibious landings supported by the Northern Fleet. All Soviet offensives started promisingly, but due either to the Soviets overextending their lines or stubborn defensive resistance, the offensives were repulsed. After Finnish and German counterattacks in Kestenga, the front lines were generally stalemated. In September 1942, the USSR attacked again at Medvezhyegorsk, but despite five days of fighting, the Soviets only managed to push the Finnish lines back along a short stretch of the front. Later that month, a Soviet landing with two battalions in Petsamo was defeated by a German counterattack. In November 1941, Hitler decided to separate the German forces fighting in Lapland from the Army of Norway and create the Army of Lapland, commanded by Eduard Dietl. In June 1942, the Army of Lapland was redesignated the 20th Mountain Army. In the early stages of the war, the Finnish Army overran the former 1939 border, but ceased their advance short of the center of Leningrad. Multiple authors have stated that Finland participated in the Siege of Leningrad, but the full extent and nature of their participation is debated and a clear consensus has yet to emerge. American historian David Glantz writes that the Finnish Army generally maintained their lines and contributed little to the siege from 1941 to 1944, whereas Russian historian Nikolai Baryshnikov stated in 2002 that Finland tacitly supported Hitler's starvation policy for the city. However, in 2009 British historian Michael Jones refuted Baryshnikov's claim and asserted that the Finnish Army cut off the city's northern supply routes but did not take further military action. In 2006, American author Lisa A. 
Kirchenbaum wrote that the siege started "when German and Finnish troops severed all land routes in and out of Leningrad." According to Clements, Mannerheim personally refused Hitler's request to assault Leningrad during their meeting on 4 June 1942. Mannerheim explained to Hitler that "Finland had every reason to wish to stay out of any further provocation of the Soviet Union." In 2014, author Jeff Rutherford described the city as being "ensnared" between the German and Finnish armies. British historian John Barber described it as a "siege by the German and Finnish armies from 8 September 1941 to 27 January 1944 [...]" in his foreword in 2017. Likewise, in 2017, Alexis Peri wrote that the city was "completely cut off, save a heavily patrolled water passage over Lake Ladoga" by "Hitler's Army Group North and his Finnish allies." The 150 speedboats, 2 minelayers and 4 steamships of the Finnish Ladoga Naval Detachment, as well as numerous shore batteries, had been stationed on Lake Ladoga since August 1941. Finnish Lieutenant General Paavo Talvela proposed on 17 May 1942 to create a joint Finnish–German–Italian unit on the lake to disrupt Soviet supply convoys to Leningrad. The unit was named Naval Detachment K and comprised four Italian MAS torpedo motorboats of the XII Squadriglia MAS, four German KM-type minelayers and the Finnish torpedo-motorboat "Sisu". The detachment began operations in August 1942 and sank numerous smaller Soviet watercraft and flatboats and assaulted enemy bases and beach fronts until it was dissolved in the winter of 1942–43. Twenty-three Siebel ferries and nine infantry transports of the German "Einsatzstab Fähre Ost" were also deployed to Lake Ladoga and unsuccessfully assaulted the island of Sukho, which protected the main supply route to Leningrad, in October 1942. Despite the siege of the city, the Soviet Baltic Fleet was still able to operate from Leningrad.
The Finnish Navy's flagship had been sunk in September 1941 in the gulf by mines during the failed diversionary Operation Nordwind (1941). In early 1942, Soviet forces recaptured the island of Gogland, but lost it and the Bolshoy Tyuters islands to Finnish forces later in spring 1942. During the winter between 1941 and 1942, the Soviet Baltic Fleet decided to use their large submarine fleet in offensive operations. Though initial submarine operations in the summer of 1942 were successful, the German and Finnish navies soon intensified their anti-submarine efforts, making Soviet submarine operations later in 1942 costly. The underwater offensive carried out by the Soviets convinced the Germans to lay anti-submarine nets as well as supporting minefields between Porkkala Peninsula and Naissaar, which proved to be an insurmountable obstacle for Soviet submarines. On the Arctic Ocean, Finnish radio intelligence intercepted Allied messages on supply convoys to Murmansk, such as PQ 17 and PQ 18, and relayed the information to the "Abwehr", German intelligence. On 19 July 1941, the Finns created a military administration in occupied East Karelia with the goal of preparing the region for eventual incorporation into Finland. The Finns aimed to expel the Russian portion of the local population, who constituted about half of it and were deemed "non-national", from the area once the war was over, and replace them with the local Finnic peoples, such as Karelians, Finns, Estonians, Ingrians and Vepsians. Most of the East Karelian population had already been evacuated before the Finnish forces arrived, but about 85,000 people, mostly the elderly, women and children, were left behind, less than half of whom were Karelians. A significant number of civilians, almost 30 percent of the remaining Russians, were interned in concentration camps. The winter between 1941 and 1942 was particularly harsh for the Finnish urban population due to poor harvests and a shortage of agricultural labourers.
However, conditions were much worse for Russians in Finnish concentration camps. More than 3,500 people died, mostly from starvation, amounting to 13.8 per cent of those detained, while the corresponding figure for the free population of the occupied territories was 2.6 per cent, and 1.4 per cent for Finland. Conditions gradually improved, ethnic discrimination in wage levels and food rations was terminated, and new schools were established for the Russian-speaking population the following year, after Commander-in-Chief Mannerheim called for the International Committee of the Red Cross from Geneva to inspect the camps. By the end of the occupation, mortality rates had dropped to the same levels as in Finland. Finland had a small Jewish population of approximately 2,300 people, of whom 300 were refugees. They had full civil rights and fought with other Finns in the ranks of the Finnish Army. The field synagogue in East Karelia was one of the very few functioning synagogues on the Axis side during the war. There were several cases of Jewish officers of the Finnish Army being awarded the German Iron Cross, which they declined. German soldiers were treated by Jewish medical officers—who sometimes saved the soldiers' lives. German command mentioned Finnish Jews at the Wannsee Conference in January 1942, wishing to transport them to the Majdanek concentration camp in occupied Poland. "SS" leader Heinrich Himmler also raised the topic of Finnish Jews during his visit in Finland in the summer of 1942; Finnish Prime Minister Jukka Rangell replied that Finland did not have a Jewish question. In November 1942, the Minister of Interior Toivo Horelli and the head of State Police Arno Anthoni secretly deported eight Jewish refugees to the "Gestapo", raising protests among Finnish Social Democrat Party ministers. Only one of the deportees survived. After the incident, the Finnish government refused to transfer any more Jews to German detainment. 
Finland began to seek an exit from the war after the German defeat at the Battle of Stalingrad in February 1943. Prime Minister Edwin Linkomies formed a new cabinet in March 1943 with peace as the top priority. Similarly, the Finns were distressed by the Allied invasion of Sicily in July and the German defeat in the Battle of Kursk in August. Negotiations were conducted intermittently during 1943–1944 between Finland, the Western Allies and the USSR, but no agreement was reached. Stalin decided to force Finland to surrender with a bombing campaign on Helsinki, starting in February 1944. It included three major air attacks totaling over 6,000 sorties. Finnish anti-aircraft defence repelled the raids and only five per cent of the dropped bombs hit their planned targets. In Helsinki, decoy searchlights and fires were placed outside the city to deceive Soviet bombers into dropping their payloads on unpopulated areas. Major air attacks also hit Oulu and Kotka, but pre-emptive radio intelligence and effective defence kept the number of casualties low. The Soviet Leningrad–Novgorod Offensive finally lifted the Siege of Leningrad on 26–27 January 1944 and pushed Army Group North to Ida-Viru County on the Estonian border. Stiff German and Estonian defence in Narva from February to August prevented the use of occupied Estonia as a favourable base for Soviet amphibious and air assaults against Helsinki and other Finnish coastal cities in support of a land offensive. Field Marshal Mannerheim had reminded the German command on numerous occasions that should German troops withdraw from Estonia, Finland would be forced to make peace, even on extremely unfavourable terms. Finland abandoned peace negotiations in April 1944 due to the unfavourable terms the USSR demanded.
On 9 June 1944, the Soviet Leningrad Front launched an offensive against Finnish positions on the Karelian Isthmus and in the area of Lake Ladoga, timed to coincide with Operation Overlord in Normandy as agreed during the Tehran Conference. The main objective of the offensive was to force Finland out of the war. Along the breakthrough front, the Red Army concentrated 3,000 guns and mortars. In some places, the concentration of artillery pieces exceeded 200 guns for every kilometre of front. Soviet artillery fired over 80,000 rounds along the front on the Karelian Isthmus. On the second day of the offensive, the artillery barrages and superior number of Soviet forces crushed the main Finnish defence line. The Red Army penetrated the second line of defence, the Vammelsuu–Taipale line (VT line), by the sixth day and recaptured Viipuri with insignificant resistance on 20 June. The Soviet breakthrough on the Karelian Isthmus forced the Finns to reinforce the area, thus allowing the concurrent Soviet offensive in East Karelia to meet less resistance and to recapture Petrozavodsk by 28 June 1944. On 25 June, the Red Army reached the third line of defence, the Viipuri–Kuparsaari–Taipale line (VKT line), and the decisive Battle of Tali-Ihantala began, which has been described as the largest battle in Nordic military history. By this point, the Finnish Army had retreated to approximately the same line of defence they had held at the end of the Winter War. Finland especially lacked modern anti-tank weaponry that could stop Soviet heavy armour, such as the KV-1 or IS-2. Thus, German Foreign Minister Joachim von Ribbentrop offered German hand-held "Panzerfaust" and "Panzerschreck" antitank weapons in exchange for a guarantee that Finland would not seek a separate peace with the USSR.
On 26 June, President Risto Ryti gave the guarantee as a personal undertaking, which he, Field Marshal Mannerheim and Prime Minister Edwin Linkomies intended to legally last only for the remainder of Ryti's presidency. In addition to delivering thousands of anti-tank weapons, Hitler sent the 122nd Infantry Division and the half-strength 303rd Assault Gun Brigade armed with Sturmgeschütz III tank destroyers as well as the Luftwaffe's Detachment Kuhlmey to provide temporary support in the most vulnerable sectors. With the new supplies and assistance from Germany, the Finnish Army halted the numerically and materially superior Soviet advance at Tali-Ihantala on 9 July 1944 and stabilised the front. More battles were fought toward the end of the war, the last of which was the Battle of Ilomantsi, fought between 26 July and 13 August 1944 and resulting in a Finnish victory with the destruction of two Soviet divisions. Resisting the Soviet offensive had exhausted Finnish resources. Despite German support under the Ryti-Ribbentrop Agreement, it was asserted that the country was unable to blunt another major offensive. Soviet victories against German Army Groups Center and North during Operation Bagration made the situation even more dire for Finland. With no imminent further Soviet offensives, Finland sought to leave the war. On 1 August, President Ryti resigned and on 4 August, Field Marshal Mannerheim was sworn in as the new president. He annulled the agreement between Ryti and Ribbentrop on 17 August, thus allowing Finland to again sue for peace with the USSR; peace terms from Moscow arrived on 29 August. Finland was required to return to the borders agreed to in the 1940 Moscow Peace Treaty, demobilise its armed forces, fulfill war reparations and cede the municipality of Petsamo. 
The Finns were also required to immediately end any diplomatic relations with Germany and expel German troops from Finnish territory by 15 September 1944; any troops remaining were to be disarmed, arrested and turned over to the Allies. The Parliament of Finland accepted the terms in a secret meeting on 2 September and requested that official negotiations for an armistice begin. The Finnish Army implemented a ceasefire at 8:00 a.m. Helsinki time on 4 September; the Red Army followed suit a day later. On 14 September, a delegation led by Finnish Prime Minister Antti Hackzell and Foreign Minister Carl Enckell began negotiating, with the USSR and the United Kingdom, the final terms of the Moscow Armistice, which eventually included additional stipulations from the Soviets. They were presented by Molotov on 18 September and accepted by the Finnish Parliament a day later. The motivations for the Soviet peace agreement with Finland are debated. Several Western historians stated that the original Soviet designs for Finland were no different from their designs for the Baltic countries. American political scientist Dan Reiter asserted that for Moscow, the control of Finland was necessary. Reiter and British historian Victor Rothwell both quoted Molotov telling his Lithuanian counterpart in 1940, when the USSR effectively annexed Lithuania, that minor states such as Finland, "will be included within the honourable family of Soviet peoples." Reiter stated that concern over severe losses pushed Stalin into accepting a limited outcome in the war rather than pursuing annexation, although some Soviet documents called for military occupation of Finland. He also wrote that Stalin had described territorial concessions, reparations and military bases as his objective with Finland to representatives from the UK, in December 1941, and the US, in March 1943, as well as at the Tehran Conference.
He believed that in the end "Stalin's desire to crush Hitler quickly and decisively without distraction from the Finnish sideshow" concluded the war. Russian historian Nikolai Baryshnikov disputed the view that the Soviet Union sought to deprive Finland of its independence. He argued that there is no documentary evidence for such claims and that the Soviet government was always open for negotiations. Baryshnikov cited, for example, the then-public-information chief of Finnish Headquarters, Major Kalle Lehmus, to show that Finnish leadership had learned of the limited Soviet plans for Finland by at least July 1944 after intelligence revealed that some Soviet divisions were to be transferred to reserve in Leningrad. Finnish historian Heikki Ylikangas stated similar findings in 2009. According to him, the USSR refocused its efforts in the summer of 1944, from the Finnish front to defeating Germany, and Mannerheim received intelligence from Colonel Aladár Paasonen in June 1944 that the Soviet Union was aiming for peace, not occupation. According to Finnish historians, the casualties of the Finnish Defence Forces amounted to 63,204 dead or missing and around 158,000 wounded. Officially, the Soviets captured 2,377 Finnish POWs, although Finnish researchers estimated the number to be around 3,500 prisoners. A total of 939 Finnish civilians died in air raids and 190 civilians were killed by Soviet partisans. Germany suffered approximately 84,000 casualties on the Finnish front: 16,400 killed, 60,400 wounded and 6,800 missing. In addition to the original peace terms of restoring the 1940 border, Finland was required to pay war reparations to the USSR, conduct domestic war-responsibility trials, lease Porkkala Peninsula to the Soviets as well as ban fascist elements and allow left-wing groups, such as the Communist Party of Finland. A Soviet-led Allied Control Commission was installed to enforce and monitor the peace agreement in Finland.
The requirement to disarm or expel any German troops left on Finnish soil by 15 September 1944 eventually escalated into the Lapland War between Finland and Germany and the evacuation of the 200,000-strong 20th Mountain Army to Norway. The Soviet demand for $600 million in war indemnities was reduced to $300 million, most likely due to pressure from the US and the UK. After the ceasefire, the USSR insisted that the payments should be based on 1938 prices, which doubled the de facto amount. The temporary Moscow Armistice was finalised without changes in the 1947 Paris Peace Treaties. Henrik Lunde noted that Finland survived World War II without losing its independence, unlike many of Germany's allies. Likewise, Helsinki was, along with Moscow, one of only two capitals of World War II combatant nations in continental Europe that was never occupied. In the longer term, Peter Provis analysed that by following self-censorship and limited appeasement policies as well as by fulfilling the USSR's demands, Finland avoided the fate of other nations that were annexed by the Soviets. Many civilians who had been displaced after the Winter War had moved back into Karelia during the Continuation War and now had to be evacuated from Karelia again. Of the 260,000 civilians who had returned to Karelia, only 19 chose to remain and become Soviet citizens. Most of the Ingrian Finns, together with Votes and Izhorians living in German-occupied Ingria, had been evacuated to Finland in 1943–1944. After the armistice, Finland was forced to return the evacuees. Soviet authorities did not allow the 55,733 returnees to resettle in Ingria and instead deported the Ingrian Finns to central regions of the USSR. The war is considered a Soviet victory. According to Finnish historians, Soviet casualties in the Continuation War were not accurately recorded and various approximations have arisen.
Russian historian Grigori Krivosheev estimated in 1997 that around 250,000 were killed or missing in action while 575,000 were medical casualties (385,000 wounded and 190,000 sick). Finnish author Nenye and others stated in 2016 that at least 305,000 were confirmed dead or missing according to the latest research, and the number of wounded certainly exceeded 500,000. The number of Soviet prisoners of war in Finland was estimated by Finnish historians to be around 64,000, 56,000 of whom were captured in 1941. Around 2,600 to 2,800 Soviet prisoners of war were rendered to Germany in exchange for roughly 2,200 Finnic prisoners of war. Of the Soviet prisoners, at least 18,318 were documented to have died in Finnish prisoner of war camps. The extent of Finland's participation in the Siege of Leningrad, and whether Soviet civilian casualties during the siege should be attributed to the Continuation War, is debated and lacks a consensus (estimates of civilian deaths during the siege range from 632,253 to 1,042,000). As for material losses, authors Jowett and Snodgrass state that 697 Soviet tanks were destroyed and 842 field artillery pieces were captured, while 1,600 Soviet aircraft were destroyed by Finnish fighter planes, 1,030 by anti-aircraft fire and 75 by the navy.
Chinese remainder theorem In number theory, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer "n" by several integers, then one can determine uniquely the remainder of the division of "n" by the product of these integers, under the condition that the divisors are pairwise coprime. The earliest known statement of the theorem is by the Chinese mathematician Sun-tzu in the "Sun-tzu Suan-ching" in the 3rd century AD. The Chinese remainder theorem is widely used for computing with large integers, as it allows replacing a computation for which one knows a bound on the size of the result by several similar computations on small integers. The Chinese remainder theorem (expressed in terms of congruences) is true over every principal ideal domain. It has been generalized to any commutative ring, with a formulation involving ideals. The earliest known statement of the theorem, as a problem with specific numbers, appears in the 3rd-century book "Sun-tzu Suan-ching" by the Chinese mathematician Sun-tzu. Sun-tzu's work contains neither a proof nor a full algorithm. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century), and appear in Fibonacci's Liber Abaci (1202). The result was later generalized with a complete solution called "Ta-yan-shu" in Ch'in Chiu-shao's 1247 "Mathematical Treatise in Nine Sections" ("Shu-shu Chiu-chang"), which was translated into English in the early 19th century by the British missionary Alexander Wylie. The notion of congruences was first introduced and used by Gauss in his "Disquisitiones Arithmeticae" of 1801. Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." 
Gauss introduces a procedure for solving the problem that had already been used by Euler but was in fact an ancient method that had appeared several times. Let n1, ..., nk be integers greater than 1, which are often called "moduli" or "divisors". Let us denote by N the product of the ni. The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i. This may be restated as follows in terms of congruences: if the ni are pairwise coprime, and if a1, ..., ak are any integers, then there exist integers x such that x ≡ ai (mod ni) for every i, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1 ≡ x2 (mod N). In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map x mod N ↦ (x mod n1, ..., x mod nk) defines a ring isomorphism between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in Z/NZ, one may do the same computation independently in each Z/niZ and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name "multi-modular computation", for linear algebra over the integers or the rational numbers. The theorem can also be restated in the language of combinatorics as the fact that the infinite arithmetic progressions of integers form a Helly family. The existence and the uniqueness of the solution may be proven independently. However, the first proof of existence, given below, uses this uniqueness. Suppose that x and y are both solutions to all the congruences. 
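The bijection asserted by the theorem can be checked exhaustively for small moduli. A minimal sketch (the function name residue_map is my own, not from the source):

```python
# Checks that, for pairwise coprime moduli, x -> (x mod n1, ..., x mod nk)
# is one-to-one from {0, ..., N-1} onto the product of the residue sets.
from math import prod

def residue_map(x, moduli):
    """Image of x under the map Z/NZ -> Z/n1Z x ... x Z/nkZ."""
    return tuple(x % n for n in moduli)

moduli = [3, 4, 5]          # pairwise coprime
N = prod(moduli)            # N = 60
images = {residue_map(x, moduli) for x in range(N)}
# N distinct images out of N possible tuples: the map is a bijection.
assert len(images) == N == 60
```

Injectivity on a finite set of the same size as the codomain gives bijectivity, which is exactly the counting argument used in the uniqueness-based existence proof below.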
As x and y give the same remainder when divided by ni, their difference x − y is a multiple of each ni. As the ni are pairwise coprime, their product N also divides x − y, and thus x and y are congruent modulo N. If x and y are supposed to be non-negative and less than N (as in the first statement of the theorem), then their difference may be a multiple of N only if x = y. The map x mod N ↦ (x mod n1, ..., x mod nk) maps congruence classes modulo N to sequences of congruence classes modulo the ni. The proof of uniqueness shows that this map is injective. As the domain and the codomain of this map have the same number of elements, the map is also surjective, which proves the existence of the solution. This proof is very simple but does not provide any direct way for computing a solution. Moreover, it cannot be generalized to other situations where the following proof can. Existence may be established by an explicit construction of x. This construction may be split into two steps, firstly by solving the problem in the case of two moduli, and secondly by extending this solution to the general case by induction on the number of moduli. We want to solve the system x ≡ a1 (mod n1) and x ≡ a2 (mod n2), where n1 and n2 are coprime. Bézout's identity asserts the existence of two integers m1 and m2 such that m1n1 + m2n2 = 1. The integers m1 and m2 may be computed by the extended Euclidean algorithm. A solution is given by x = a1m2n2 + a2m1n1. Indeed, m2n2 = 1 − m1n1 ≡ 1 (mod n1), implying that x ≡ a1m2n2 ≡ a1 (mod n1). The second congruence is proved similarly. Consider a sequence of congruence equations x ≡ ai (mod ni), for i = 1, ..., k, where the ni are pairwise coprime. The two first equations have a solution x1 provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation x ≡ x1 (mod n1n2). As the other ni are coprime with n1n2, this reduces solving the initial problem of k equations to a similar problem with k − 1 equations. Iterating the process, one gets eventually the solutions of the initial problem. 
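The two-moduli construction can be sketched as follows (extended_gcd and crt_pair are illustrative names of my own; the Bézout coefficients come from the extended Euclidean algorithm, as in the proof):

```python
def extended_gcd(a, b):
    """Return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_pair(a1, n1, a2, n2):
    """Solve x = a1 (mod n1), x = a2 (mod n2) for coprime n1, n2."""
    g, m1, m2 = extended_gcd(n1, n2)   # m1*n1 + m2*n2 == 1
    assert g == 1, "moduli must be coprime"
    x = a1 * m2 * n2 + a2 * m1 * n1    # the formula from the proof
    return x % (n1 * n2)               # normalise into [0, n1*n2)

# x = 0 (mod 3) and x = 3 (mod 4): the construction yields -9 = 3 (mod 12)
assert crt_pair(0, 3, 3, 4) == 3
```

The induction on the number of moduli then amounts to repeatedly replacing two congruences by the single congruence crt_pair returns, modulo the product of the two moduli.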
For constructing a solution, it is not necessary to make an induction on the number of moduli. However, such a direct construction involves more computation with large numbers, which makes it less efficient and less used. Nevertheless, Lagrange interpolation is a special case of this construction, applied to polynomials instead of integers. Let Ni = N/ni be the product of all moduli but the ith one. As the ni are pairwise coprime, Ni and ni are coprime. Thus Bézout's identity applies, and there exist integers Mi and mi such that MiNi + mini = 1. A solution of the system of congruences is x = a1M1N1 + ... + akMkNk. In fact, as Nj is a multiple of ni for j ≠ i, we have x ≡ aiMiNi ≡ ai (mod ni), for every i. Consider a system of congruences x ≡ ai (mod ni), for i = 1, ..., k, where the ni are pairwise coprime, and let N = n1n2...nk. In this section several methods are described for computing the unique solution x such that 0 ≤ x < N, and these methods are applied on the example x ≡ 0 (mod 3), x ≡ 3 (mod 4), x ≡ 4 (mod 5). It is easy to check whether a value of x is a solution: it suffices to compute the remainder of the Euclidean division of x by each ni. Thus, to find the solution, it suffices to check successively the integers from 0 to N − 1 until finding the solution. Although very simple, this method is very inefficient: for the simple example considered here, 40 integers (including 0) have to be checked for finding the solution, which is 39. This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of N, and the average number of operations is of the order of N. Therefore, this method is used neither for hand-written computation nor on computers. The search of the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that 0 ≤ ai < ni (if it were not the case, it would suffice to replace each ai by the remainder of its division by ni). 
This implies that the solution belongs to the arithmetic progression a1, a1 + n1, a1 + 2n1, ... By testing the values of these numbers modulo n2, one eventually finds a solution x1 of the two first congruences. Then the solution belongs to the arithmetic progression x1, x1 + n1n2, x1 + 2n1n2, ... Testing the values of these numbers modulo n3, and continuing until every modulus has been tested, gives eventually the solution. This method is faster if the moduli have been ordered by decreasing value, that is if n1 > n2 > ... > nk. For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9 = 4 + 5, 14 = 9 + 5, ... For each of them, compute the remainder modulo 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 20 = 5 × 4 at each step, and computing only the remainders modulo 3. This gives: 4 mod 4 → 0; 9 mod 4 → 1; 14 mod 4 → 2; 19 mod 4 → 3, so 19 satisfies the two first congruences; 19 mod 3 → 1; 39 mod 3 → 0, so 39 is the solution. This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods, for very large products of moduli. Although dramatically faster than the systematic search, this method also has an exponential time complexity and is therefore not used on computers. The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo n1n2 (for getting a result in the interval [0, n1n2 − 1]). As the Bézout coefficients may be computed with the extended Euclidean algorithm, the whole computation has, at most, a quadratic time complexity in the number of digits of the moduli. For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. 
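The sieving procedure described above can be sketched as follows (crt_sieve is an illustrative name of my own; this mirrors the hand computation, not an efficient algorithm):

```python
def crt_sieve(residues, moduli):
    """Sieve search for the CRT solution, one modulus at a time."""
    # Ordering by decreasing modulus speeds the search, as noted in the text.
    pairs = sorted(zip(moduli, residues), reverse=True)
    step, x = pairs[0]           # x walks the progression a1, a1+n1, a1+2n1, ...
    for n, a in pairs[1:]:
        while x % n != a:        # test values until the next congruence holds
            x += step
        step *= n                # solutions now agree modulo the product so far
    return x

# The worked example: x = 0 (mod 3), x = 3 (mod 4), x = 4 (mod 5) gives 39.
assert crt_sieve([0, 3, 4], [3, 4, 5]) == 39
```

Each pass narrows the search to a sparser arithmetic progression, which is why the method beats checking every integer from 0 to N − 1 while still being exponential in the worst case.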
Iterating this process eventually provides the solution with a complexity that is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the first two moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers. Another strategy consists in partitioning the moduli into pairs whose products have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximately divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time. On the current example (which has only three moduli), both strategies are identical and work as follows. Bézout's identity for 3 and 4 is (−1) × 3 + 1 × 4 = 1. Putting this in the formula given for proving the existence gives −9 as a solution of the first two congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. One may continue with any of these solutions, but the solution 3 = −9 + 12 is smaller (in absolute value) and thus probably leads to an easier computation. Bézout's identity for 5 and 3 × 4 = 12 is 5 × 5 + (−2) × 12 = 1. Applying the same formula again, we get 5 × 5 × 3 + (−2) × 12 × 4 = −21 as a solution of the problem. The other solutions are obtained by adding any multiple of 3 × 4 × 5 = 60, and the smallest positive solution is −21 + 60 = 39.
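The two-moduli construction, iterated over the whole list of moduli as described above, can be sketched as follows. This is a minimal illustration assuming the same example system; the function names are not from the text.

```python
def extended_gcd(a, b):
    """Extended Euclidean algorithm: return (g, u, v)
    with u*a + v*b == g == gcd(a, b)."""
    old_r, r = a, b
    old_u, u = 1, 0
    old_v, v = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
        old_v, v = v, old_v - q * v
    return old_r, old_u, old_v

def crt_pairwise(residues, moduli):
    """Repeatedly merge two congruences into a single congruence
    modulo the product, using the Bézout coefficients of the moduli."""
    a1, n1 = residues[0], moduli[0]
    for a2, n2 in zip(residues[1:], moduli[1:]):
        _, m1, m2 = extended_gcd(n1, n2)   # m1*n1 + m2*n2 == 1
        a1 = (a1 * m2 * n2 + a2 * m1 * n1) % (n1 * n2)
        n1 *= n2
    return a1

print(crt_pairwise([0, 3, 4], [3, 4, 5]))  # -> 39
```

Reducing modulo n1 * n2 at each merge keeps the intermediate values small, which is exactly the reduction into the interval mentioned in the text.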
The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of simultaneous linear Diophantine equations: where the unknown integers are formula_41 and the formula_64 Therefore, every general method for solving such systems may be used for finding the solution of the Chinese remainder theorem, such as the reduction of the matrix of the system to Smith normal form or Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of Bézout's identity. The Chinese remainder theorem has been stated above in three different ways: in terms of remainders, of congruences, and of a ring isomorphism. The statement in terms of remainders does not apply, in general, to principal ideal domains, as remainders are not defined in such rings. However, the two other versions make sense over a principal ideal domain: it suffices to replace "integer" by "element of the domain" and formula_65 by the domain. These two versions of the theorem are true in this context, because the proofs (except for the first existence proof) are based on Euclid's lemma and Bézout's identity, which are true over every principal ideal domain. However, in general, the theorem is only an existence theorem and does not provide any way for computing the solution, unless one has an algorithm for computing the coefficients of Bézout's identity. The statement in terms of remainders cannot be generalized to every principal ideal domain, but its generalization to Euclidean domains is straightforward. The ring of univariate polynomials over a field is the typical example of a Euclidean domain that is not the integers. Therefore, we state the theorem for the ring of univariate polynomials formula_66 over a field formula_67 To obtain the theorem for a general Euclidean domain, it suffices to replace the degree by the Euclidean function of the Euclidean domain.
The Chinese remainder theorem for polynomials is thus: Let formula_68 (the moduli) be pairwise coprime polynomials in formula_66. Let formula_70 be the degree of formula_68, and formula_72 be the sum of the formula_73 If formula_74 are polynomials such that formula_75 or formula_76 for every index, then there is one and only one polynomial formula_77 such that formula_78 and the remainder of the Euclidean division of formula_77 by formula_68 is formula_81 for every index. The construction of the solution may be done as in the preceding sections. However, the construction may be simplified by using, as follows, partial fraction decomposition instead of the extended Euclidean algorithm. Thus, we want to find a polynomial formula_77 which satisfies the congruences for formula_84 Consider the polynomials The partial fraction decomposition of formula_86 gives polynomials formula_87 with degrees formula_88 such that and thus Then a solution of the simultaneous congruence system is given by the polynomial In fact, we have for formula_93 This solution may have a degree larger than formula_94 The unique solution of degree less than formula_72 may be deduced by considering the remainder formula_96 of the Euclidean division of formula_97 by formula_98 This solution is A special case of the Chinese remainder theorem for polynomials is Lagrange interpolation. For this, consider monic polynomials of degree one: They are pairwise coprime if the formula_101 are all different. The remainder of the division of a polynomial formula_77 by formula_68 is formula_104 Now, let formula_105 be constants (polynomials of degree 0) in formula_67 Both Lagrange interpolation and the Chinese remainder theorem assert the existence of a unique polynomial formula_107 of degree less than formula_108 such that for every formula_37 The Lagrange interpolation formula is exactly the result, in this case, of the above construction of the solution.
More precisely, let The partial fraction decomposition of formula_112 is In fact, reducing the right-hand side to a common denominator, one gets and the numerator is equal to one, since it is a polynomial of degree less than formula_115 which takes the value one for formula_108 different values of formula_117 Using the above general formula, we get the Lagrange interpolation formula: Hermite interpolation is an application of the Chinese remainder theorem for univariate polynomials, which may involve moduli of arbitrary degrees (Lagrange interpolation involves only moduli of degree one). The problem consists of finding a polynomial of the least possible degree, such that the polynomial and its first derivatives take given values at some fixed points. More precisely, let formula_119 be formula_108 elements of the ground field formula_121 and, for formula_122 let formula_123 be the values of the first formula_124 derivatives of the sought polynomial at formula_101 (including the 0th derivative, which is the value of the polynomial itself). The problem is to find a polynomial formula_77 such that its "j"th derivative takes the value formula_127 at formula_128 for formula_129 and formula_130 Consider the polynomial This is the Taylor polynomial of order formula_132 at formula_101 of the unknown polynomial formula_134 Therefore, we must have Conversely, any polynomial formula_136 that satisfies these formula_108 congruences verifies, in particular, for any formula_138 that formula_68 is its Taylor polynomial of order formula_141 at formula_101; that is, formula_77 solves the initial Hermite interpolation problem. The Chinese remainder theorem asserts that there exists exactly one polynomial of degree less than the sum of the formula_144 which satisfies these formula_108 congruences. There are several ways of computing the solution formula_134 One may use the method described at the beginning of this section, or one of the constructions given above.
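Lagrange interpolation, the degree-one special case of the polynomial Chinese remainder theorem discussed above, can be sketched directly from its formula. This is a minimal illustration; the sample points and values below are invented for the demonstration, and exact rational arithmetic is used to avoid floating-point error.

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < len(points)
    passing through the given (x_i, y_i) pairs, using the Lagrange
    basis polynomials (products of degree-one factors)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# The unique quadratic through (0, 1), (1, 2), (2, 5) is x^2 + 1.
print(lagrange_interpolate([(0, 1), (1, 2), (2, 5)], 3))  # -> 10
```

Each Lagrange basis polynomial takes the value 1 at its own node and 0 at the others, which is exactly the congruence condition modulo the degree-one moduli.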
The Chinese remainder theorem can be generalized to non-coprime moduli. Let formula_147 be any integers, let formula_148, and consider the system of congruences: If formula_150, then this system of equations has a unique solution modulo formula_151. Otherwise, it has no solutions. If we use Bézout's identity to write formula_152, then the solution is This defines an integer, as the greatest common divisor of the moduli divides each of them. Otherwise, the proof is very similar to that for coprime moduli. The Chinese remainder theorem can be generalized to any ring, by using coprime ideals (also called comaximal ideals). Two ideals are coprime if there are elements formula_154 and formula_155 such that formula_156 This relation plays the role of Bézout's identity in the proofs related to this generalization, which otherwise are very similar. The generalization may be stated as follows. Consider a family of two-sided ideals of a ring formula_157, and their intersection. If the ideals are pairwise coprime, we have the isomorphism: between the quotient ring formula_159 and the direct product of the formula_160 where "formula_161" denotes the image of the element formula_41 in the quotient ring defined by the ideal formula_163 Moreover, if formula_157 is commutative, then the ideal intersection of pairwise coprime ideals is equal to their product. The Chinese remainder theorem has been used to construct a Gödel numbering for sequences, which is involved in the proof of Gödel's incompleteness theorems. The prime-factor FFT algorithm (also called the Good–Thomas algorithm) uses the Chinese remainder theorem for reducing the computation of a fast Fourier transform of size formula_53 to the computation of two fast Fourier transforms of smaller sizes formula_8 and formula_9 (provided that formula_8 and formula_9 are coprime). Most implementations of RSA use the Chinese remainder theorem during signing of HTTPS certificates and during decryption.
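The non-coprime generalization described above can be sketched for two congruences: a solution exists exactly when the greatest common divisor of the moduli divides the difference of the residues. This is a minimal sketch with invented example values; the function name is not from the text, and the modular inverse uses Python 3.8's three-argument pow.

```python
from math import gcd

def crt_general(a1, n1, a2, n2):
    """Solve x ≡ a1 (mod n1), x ≡ a2 (mod n2) for possibly non-coprime
    moduli. Return (x, lcm) with the solution unique modulo the lcm,
    or None when gcd(n1, n2) does not divide a1 - a2."""
    g = gcd(n1, n2)
    if (a1 - a2) % g != 0:
        return None  # incompatible congruences: no solution
    lcm = n1 // g * n2
    # Solve n1*t ≡ a2 - a1 (mod n2) via the reduced, coprime pair.
    t = ((a2 - a1) // g * pow(n1 // g, -1, n2 // g)) % (n2 // g)
    return (a1 + n1 * t) % lcm, lcm

print(crt_general(2, 6, 8, 10))  # -> (8, 30): 8 ≡ 2 (mod 6), 8 ≡ 8 (mod 10)
```

Merging pairs this way extends the method to any number of moduli, with the combined modulus being the least common multiple rather than the product.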
The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people so that, all together (but no one alone), they can recover a certain secret from the given set of shares. Each of the shares is represented by a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality. The range ambiguity resolution techniques used with medium pulse repetition frequency radar can be seen as a special case of the Chinese remainder theorem. Dedekind's theorem on the linear independence of characters: Let a monoid and an integral domain be given, the latter viewed as a monoid by considering its multiplication. Then any finite family of distinct monoid homomorphisms from the monoid to the domain is linearly independent; in other words, every family of coefficients in the domain for which the corresponding linear combination of the homomorphisms vanishes must be the zero family. Proof. First assume that the domain is a field; otherwise, replace the integral domain by its field of fractions, and nothing will change. We can linearly extend the monoid homomorphisms to algebra homomorphisms defined on the monoid ring of the monoid over the field. Then, by linearity, the vanishing condition carries over to the extended maps. Next, any two of the extended linear maps are not proportional to each other: otherwise the two original homomorphisms would also be proportional, and thus equal, since as monoid homomorphisms they both send the identity to 1; this contradicts the assumption that they are distinct. Therefore, their kernels are distinct. Since the ground ring is a field, each kernel is a maximal ideal of the monoid ring. Because they are distinct and maximal, the kernels are pairwise coprime. The Chinese remainder theorem (for general rings) yields an isomorphism between the quotient of the monoid ring by the intersection of the kernels and the product of the corresponding quotient fields. Consequently, the associated map is surjective.
Under these isomorphisms, the map corresponds to the linear form sending a vector to the sum of its components weighted by the coefficients. The vanishing condition then yields that this form vanishes on every vector in the image of the map. Since the map is surjective, the form vanishes on every vector, and consequently all the coefficients are zero. QED.
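The secret-sharing application mentioned above can be sketched as follows: each share is the residue of the secret modulo one of several pairwise coprime moduli, and enough shares recover the secret by solving the congruences. This is a toy illustration with invented parameters; practical threshold schemes (such as Mignotte's) impose additional conditions on the moduli to guarantee that too few shares reveal nothing useful.

```python
def make_shares(secret, moduli):
    """One share per modulus: the residue of the secret, as a congruence."""
    return [(secret % m, m) for m in moduli]

def recover(shares):
    """Solve x ≡ a (mod m) over the given shares by merging one
    congruence at a time (moduli assumed pairwise coprime)."""
    x, n = 0, 1
    for a, m in shares:
        t = ((a - x) * pow(n, -1, m)) % m
        x += n * t
        n *= m
    return x

moduli = [11, 13, 17, 19]
shares = make_shares(1000, moduli)
# Any three shares suffice here because 11 * 13 * 17 > 1000.
print(recover(shares[:3]))  # -> 1000
```

Fewer shares than the threshold leave the secret undetermined: with only two of these shares, the secret is known only modulo a product smaller than the secret itself.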
https://en.wikipedia.org/wiki?curid=7713
Cyril M. Kornbluth Cyril M. Kornbluth (July 2, 1923 – March 21, 1958) was an American science fiction author and a member of the Futurians. He used a variety of pen-names, including Cecil Corwin, S. D. Gottesman, Edward J. Bellin, Kenneth Falconer, Walter C. Davies, Simon Eisner, Jordan Park, Arthur Cooke, Paul Dennis Lavond, and Scott Mariner. The "M" in Kornbluth's name may have been in tribute to his wife, Mary Byers; Kornbluth's colleague and collaborator Frederik Pohl confirmed Kornbluth's lack of any actual middle name in at least one interview. Kornbluth was born and grew up in the uptown Manhattan neighborhood of Inwood, in New York City. He was of Polish Jewish descent, the son of a "second-generation [American] Jew" who ran his own tailor shop. According to his widow, Kornbluth was a "precocious child", learning to read by the age of three and writing his own stories by the time he was seven. He graduated from high school at thirteen, received a CCNY scholarship at fourteen, and was "thrown out for leading a student strike" without graduating. As a teenager, he became a member of the Futurians, an influential group of science fiction fans and writers. While a member of the Futurians, he met and became friends with Frederik Pohl, Donald A. Wollheim, Robert A. W. Lowndes, and his future wife Mary Byers. He also participated in the Fantasy Amateur Press Association. Kornbluth served in the US Army during World War II (European Theatre). He received a Bronze Star for his service in the Battle of the Bulge, where he served as a member of a heavy machine gun crew. Upon his discharge, he returned to finish his education at the University of Chicago under the G.I. Bill. While living in Chicago he also worked at Trans-Radio Press, a news wire service. In 1951, he started writing full-time, returning to the East Coast where he collaborated on novels with his old Futurian friends Frederik Pohl and Judith Merril. Kornbluth began writing at 15.
His first solo story, "The Rocket of 1955", was published in Richard Wilson's fanzine "Escape" (Vol. 1, No 2, August 1939); his first collaboration, "Stepsons of Mars," written with Richard Wilson and published under the name "Ivar Towers", appeared in the April 1940 "Astonishing". His other short fiction includes "The Little Black Bag", "The Marching Morons", "The Altar at Midnight", "MS. Found in a Chinese Fortune Cookie", "Gomez" and "The Advent on Channel 12". "The Little Black Bag" was first adapted for television live on the television show "Tales of Tomorrow" on May 30, 1952. It was later adapted for television by the BBC in 1969 for its "Out of the Unknown" series. In 1970, the same story was adapted by Rod Serling for an episode of his "Night Gallery" series. This dramatization starred Burgess Meredith as the alcoholic Dr. William Fall, who had long lost his doctor's license and become a homeless alcoholic. He finds a bag containing advanced medical technology from the future, which, after an unsuccessful attempt to pawn it, he uses benevolently. "The Marching Morons" is a look at a far future in which the world's population consists of five billion idiots and a few million geniuses – the precarious minority of the "elite" working desperately to keep things running behind the scenes. In his introduction to "The Best of C.M. Kornbluth", Pohl states that "The Marching Morons" is a direct sequel to "The Little Black Bag": it is easy to miss this, as "Bag" is set in the contemporary present while "Morons" takes place several centuries from now, and there is no character who appears in both stories. The titular black bag in the first story is actually an artifact from the time period of "The Marching Morons": a medical kit filled with self-driven instruments enabling a far-future moron to "play doctor". 
A future Earth similar to "The Marching Morons" – a civilisation of morons protected by a small minority of hidden geniuses – is used again in the final stages of Kornbluth & Pohl's "Search the Sky". "MS. Found in a Chinese Fortune Cookie" (1957) is supposedly written by Kornbluth using notes by "Cecil Corwin", who has been declared insane and incarcerated, and who smuggles out in fortune cookies the ultimate secret of life. This fate is said to be Kornbluth's response to the unauthorized publication of "Mask of Demeter" (as by "Corwin" and "Martin Pearson" (Donald A. Wollheim)) in Wollheim's anthology "Prize Science Fiction" in 1953. Biographer Mark Rich describes the 1958 story "Two Dooms" as one of several stories which are "concern[ed] with the ethics of theoretical science" and which "explore moral quandaries of the atomic age": Many of Kornbluth's novels were written as collaborations: either with Judith Merril (using the pseudonym Cyril Judd), or with Frederik Pohl. These include "Gladiator-At-Law" and "The Space Merchants". "The Space Merchants" contributed significantly to the maturing and to the wider academic respectability of the science fiction genre, not only in America but also in Europe. Kornbluth also wrote several novels under his own name, including "The Syndic" and "Not This August". Kornbluth died at age 34 in Levittown, New York. Scheduled to meet with Bob Mills in New York City to interview for the position of editor of "The Magazine of Fantasy & Science Fiction", Kornbluth had to shovel snow from his driveway, which delayed him. Running to meet his train, he suffered a fatal heart attack on the platform of the station. A number of short stories remained unfinished at Kornbluth's death; these were eventually completed and published by Pohl. One of these stories, "The Meeting" ("The Magazine of Fantasy & Science Fiction", November 1972), was the co-winner of the 1973 Hugo Award for Best Short Story; it tied with R. A. 
Lafferty's "Eurema's Dam." Almost all of Kornbluth's solo SF stories have been collected as "His Share of Glory: The Complete Short Science Fiction of C. M. Kornbluth" (NESFA Press, 1997). Frederik Pohl, in his autobiography "The Way the Future Was", Damon Knight, in his memoir "The Futurians", and Isaac Asimov, in his memoirs "In Memory Yet Green" and "I. Asimov: A Memoir", all give descriptions of Kornbluth as a man of odd personal habits and eccentricities. Kornbluth, for example, decided to educate himself by reading his way through an entire encyclopedia from A to Z; in the course of this effort, he acquired a great deal of esoteric knowledge that found its way into his stories, in alphabetical order by subject. When Kornbluth wrote a story that mentioned the "ballista", an Ancient Roman weapon, Pohl knew that Kornbluth had finished the 'A's and had started on the 'B's. According to Pohl, Kornbluth never brushed his teeth, and they were literally green. Deeply embarrassed by this, Kornbluth developed the habit of holding his hand in front of his mouth when speaking. Kornbluth disliked black coffee, but felt obliged to acquire a taste for it because he believed that professional authors were "supposed to" drink black coffee. He trained himself by putting gradually less cream into each cup of coffee he drank, until he eventually "weaned himself" (Knight's description) and switched to black coffee. Spider Robinson praised this collection, saying "I haven't enjoyed a book so much in years." Mark Rich wrote, "Critics judging Kornbluth by this anthology, edited by Pohl, have seen a growing bitterness in his later stories. This reflects editorial choice more than reality, because Kornbluth also wrote delightful humor in his last years, in stories not collected here. These tales demonstrate Kornbluth's effective use of everyday individuals from a variety of ethnic backgrounds as well as his well-tuned ear for dialect." 
Kornbluth's name is mentioned in Lemony Snicket's "Series of Unfortunate Events" as a member of V.F.D., a secret organization dedicated to the promotion of literacy, classical learning, and crime prevention.
https://en.wikipedia.org/wiki?curid=7716
Coprophagia Coprophagia () or coprophagy () is the consumption of feces. The word is derived from the Greek κόπρος ("feces") and φαγεῖν ("to eat"). Coprophagy refers to many kinds of feces-eating, including eating feces of other species (heterospecifics), of other individuals (allocoprophagy), or one's own (autocoprophagy) – those once deposited or taken directly from the anus. In humans, coprophagia has been described since the late 19th century in individuals with mental illnesses and in unconventional sexual acts. Some animal species eat feces as a normal behavior, in particular lagomorphs, which do so to allow tough plant materials to be digested more thoroughly by passing twice through the digestive tract. Other species may eat feces under certain conditions. "Ttongsul", or "feces wine", has been used in old Korean medicine. Ideally, a child's excrement is used in the preparation, with alcohol content up to 9% by volume. Centuries ago, physicians tasted their patients' feces to better judge their state and condition. Lewin reported, "... consumption of fresh, warm camel feces has been recommended by Bedouins as a remedy for bacterial dysentery; its efficacy (probably attributable to the antibiotic subtilisin from "Bacillus subtilis") was anecdotally confirmed by German soldiers in Africa during World War II". Coprophilia is a paraphilia (DSM-5), where the object of sexual interest is feces, and may be associated with coprophagia. Coprophagia is sometimes depicted in pornography, usually under the term "scat" (from scatology). A notorious example of this is the pornographic shock video "2 Girls 1 Cup". "The 120 Days of Sodom", a 1785 novel by the Marquis de Sade, is full of detailed descriptions of erotic sadomasochistic coprophagia. An Austrian actor and pornographic director created the series "Avantgarde Extreme" and "Portrait Extrem", which explore coprophagy, coprophilia, and urolagnia.
GG Allin, an American shock rock singer-songwriter, often featured coprophagy in his performances. Coprophagia has also been observed in some people with schizophrenia and pica. François Rabelais, in his classic "Gargantua and Pantagruel", often employs the expression "mâche-merde" or "mâchemerde", meaning "shit-chewer". This, in turn, comes from the Greek comedians Aristophanes and particularly Menander, who often use the term "skatophagos" (σκατοφάγος). Thomas Pynchon's award-winning 1973 novel "Gravity's Rainbow" contains a detailed scene of coprophagia. Modern Russian writer Vladimir Sorokin's novel "Norma" describes a society where coprophagia is institutionalized and mandatory. In the Adult Swim show "Rick and Morty", coprophagia is an issue with which the family therapist assists other patients. In the episode "Pickle Rick", the eccentric scientist Rick Sanchez turns himself into a pickle just as he and his family are scheduled to attend a therapy session. Before the session, the family is waiting outside when a teacher at Rick's grandchildren's school comes out and asks how long the family has been eating poop. On the door to the office, the doctor's sign reads "Dr. Wong Family Therapist Coprophagia Recovery". During the session, Rick's grandson Morty notices a binder on the therapist's coffee table. He picks it up and looks inside. Inside the binder are pictures of people eating poop (the audience never sees these images), and Morty asks, with a repulsed look, why those pictures are in the binder. When the family leaves, the doctor asks them to refer to her anybody they know who has issues with eating poop. Coprophagous insects consume and redigest the feces of large animals. These feces contain substantial amounts of semidigested food, particularly in the case of herbivores, owing to the inefficiency of the large animals' digestive systems.
Thousands of species of coprophagous insects are known, especially among the orders Diptera and Coleoptera. Examples of such flies are "Scathophaga stercoraria" and "Sepsis cynipsea", dung flies commonly found in Europe around cattle droppings. Among beetles, dung beetles are a diverse lineage, many of which feed on the microorganism-rich liquid component of mammals' dung, and lay their eggs in balls composed mainly of the remaining fibrous material. Termites eat one another's feces as a means of obtaining their hindgut protists. Termites and protists have a symbiotic relationship (e.g. with protozoa that allow the termites to digest the cellulose in their diet). For example, in one group of termites, a three-way symbiotic relationship exists: termites of the family Rhinotermitidae, cellulolytic protists of the genus "Pseudotrichonympha" in the guts of these termites, and intracellular bacterial symbionts of the protists. Domesticated and wild mammals are sometimes coprophagic, and in some species, this forms an essential part of their method of digesting tough plant material. Dogs may be coprophagic, possibly to rebalance their microbiome or to ingest missing nutrients. Species within the Lagomorpha (rabbits, hares, and pikas) produce two types of fecal pellets: hard ones, and soft ones called cecotropes. Animals in these species reingest their cecotropes to extract further nutrients. Cecotropes derive from chewed plant material that collects in the cecum, a chamber between the large and small intestine, containing large quantities of symbiotic bacteria that help with the digestion of cellulose and also produce certain B vitamins. After excretion of the soft cecotrope, it is again eaten whole by the animal and redigested in a special part of the stomach. The pellets remain intact for up to six hours in the stomach; the bacteria within continue to digest the plant carbohydrates.
This double-digestion process enables these animals to extract nutrients that they may have missed during the first passage through the gut, as well as the nutrients formed by the microbial activity. This process serves the same purpose within these animals as rumination (cud-chewing) does in cattle and sheep. Cattle in the United States are often fed chicken litter. Concerns have arisen that the practice of feeding chicken litter to cattle could lead to bovine spongiform encephalopathy (mad-cow disease) because of the crushed bone meal in chicken feed. The U.S. Food and Drug Administration regulates this practice by attempting to prevent the introduction of any part of cattle brain or spinal cord into livestock feed. Other countries, such as Canada, have banned chicken litter for use as a livestock feed. The young of elephants, giant pandas, koalas, and hippos eat the feces of their mothers or other animals in the herd, to obtain the bacteria required to properly digest vegetation found in their ecosystems. When such animals are born, their intestines are sterile and do not contain these bacteria. Without doing this, they would be unable to obtain any nutritional value from plants. Hamsters, guinea pigs, chinchillas, hedgehogs, and naked mole-rats eat their own droppings, which are thought to be a source of vitamins B and K, produced by gut bacteria. On rare occasions gorillas have been observed consuming their feces, possibly out of boredom, a desire for warm food, or to reingest seeds contained in the feces. Some carnivorous plants, such as pitcher plants of the genus "Nepenthes", obtain nourishment from the feces of commensal animals.
https://en.wikipedia.org/wiki?curid=7720
C. L. Moore Catherine Lucille Moore (January 24, 1911 – April 4, 1987) was an American science fiction and fantasy writer, who first came to prominence in the 1930s writing as C. L. Moore. She was among the first women to write in the science fiction and fantasy genres, though earlier women writers in these genres include Clare Winger Harris, Greye La Spina, and Francis Stevens, amongst others. Nevertheless, Moore's work paved the way for many other female speculative fiction writers. Moore married her first husband, Henry Kuttner, in 1940, and most of her work from 1940 to 1958 (Kuttner's death) was written by the couple collaboratively. They were prolific co-authors under their own names, although more often under any one of several pseudonyms. As "Catherine Kuttner", she had a brief career as a television scriptwriter from 1958 to 1962. She retired from writing in 1963. Moore was born on January 24, 1911, in Indianapolis, Indiana. She was chronically ill as a child and spent much of her time reading literature of the fantastic. She left college during the Great Depression to work as a secretary at the Fletcher Trust Company in Indianapolis. "The Vagabond", a student-run magazine at Indiana University, published three of her stories when she was a student there. The three short stories, all with a fantasy theme and all credited to "Catherine Moore", appeared in 1930/31. Her first professional sales appeared in pulp magazines beginning in 1933. Her decision to publish under the name "C.L. Moore" stemmed not from a desire to hide her gender, but to keep her employers at Fletcher Trust from knowing that she was working as a writer on the side. Her early work included two significant series in "Weird Tales", then edited by Farnsworth Wright. One features the rogue and adventurer Northwest Smith wandering through the Solar System; the other features the swordswoman/warrior Jirel of Joiry, one of the first female protagonists in sword-and-sorcery fiction.
Both series are sometimes named for their lead characters. One of the Northwest Smith stories, "Nymph of Darkness" ("Fantasy Magazine" (April 1935); expurgated version, "Weird Tales" (Dec 1939)), was written in collaboration with Forrest J Ackerman. The most famous Northwest Smith story is "Shambleau", which was also Moore's first professional sale. It originally appeared in the November 1933 issue of "Weird Tales", netting her $100, and later becoming a popular anthology reprint. Her most famous Jirel story is also the first one, "Black God's Kiss", which was the cover story in the October 1934 issue of "Weird Tales", subtitled "the weirdest story ever told". Moore's early stories were notable for their emphasis on the senses and emotions, which was unusual in genre fiction at the time. Moore's work also appeared in "Astounding Science Fiction" magazine throughout the 1940s. Several stories written for that magazine were later collected in her first published book, "Judgment Night" (1952). One of them, the novella "No Woman Born" (1944), was to be included in more than 10 different science fiction anthologies including "The Best of C. L. Moore". Included in that collection were "Judgment Night" (first published in August and September 1943), the lush rendering of a future galactic empire with a sober meditation on the nature of power and its inevitable loss; "The Code" (July 1945), an homage to the classic Faust with modern theories and Lovecraftian dread; "Promised Land" (February 1950) and "Heir Apparent" (July 1950), both documenting the grim twisting that mankind must undergo in order to spread into the Solar System; and "Paradise Street" (September 1950), a futuristic take on the Old West conflict between lone hunter and wilderness-taming settlers. Moore met Henry Kuttner, also a science fiction writer, in 1936 when he wrote her a fan letter under the impression that "C. L. Moore" was a man.
They soon collaborated on a story that combined Moore's signature characters, Northwest Smith and Jirel of Joiry: "Quest of the Starstone" (1937). Moore and Kuttner married in 1940 and thereafter wrote many of their stories in collaboration, sometimes under their own names, but more often using the joint pseudonyms C. H. Liddell, Lawrence O'Donnell, or Lewis Padgett — most commonly the latter, a combination of their mothers' maiden names. Moore still occasionally wrote solo work during this period, including the frequently anthologized "No Woman Born" (1944). A selection of Moore's solo short fiction work from 1942 through 1950 was collected in 1952's "Judgment Night". Moore's only solo novel, "Doomsday Morning", appeared in 1957. The vast majority of Moore's work in the period, though, was written as part of a very prolific partnership. Working together, the couple managed to combine Moore's style with Kuttner's more cerebral storytelling. They continued to work in sf and fantasy, and their works include two frequently anthologized sf classics: "Mimsy Were the Borogoves" (February 1943), the basis for the film "The Last Mimzy" (2007), and "Vintage Season" (September 1946), the basis for the film "Timescape" (1992). As "Lewis Padgett" they also penned two mystery novels: "The Brass Ring" (1946) and "The Day He Died" (1947). After Kuttner's death in 1958, Moore continued teaching her writing course at the University of Southern California but permanently retired from writing any further literary fiction. Instead, working as "Catherine Kuttner", she carved out a short-lived career as a scriptwriter for Warner Brothers television, writing episodes of the westerns "Sugarfoot", "Maverick", and "The Alaskans", as well as the detective series "77 Sunset Strip", all between 1958 and 1962. However, upon marrying Thomas Reggie (who was not a writer) in 1963, she ceased writing entirely.
Moore was the author guest of honor at Kansas City, Missouri's fantasy and science fiction convention BYOB-Con 6, held over the U.S. Memorial Day weekend in May 1976. In a 1979 interview she said that she and a writer friend were collaborating on a fantasy story that might form the basis of a new series, but nothing was ever published. In 1981, Moore received two annual awards for her career in fantasy literature: the World Fantasy Award for Life Achievement, chosen by a panel of judges at the World Fantasy Convention, and the Gandalf Grand Master Award, chosen by vote of participants in the World Science Fiction Convention. (Thus she became the eighth and final Grand Master of Fantasy, sponsored by the Swordsmen and Sorcerers' Guild of America, in partial analogy to the Grand Master of Science Fiction sponsored by the Science Fiction Writers of America.) Moore was an active member of the Tom and Terri Pinckard science fiction literary salon and a frequent contributor to literary discussions with the regular membership, including Robert Bloch, George Clayton Johnson, Larry Niven, Jerry Pournelle, Norman Spinrad, A. E. van Vogt, and others, as well as many visiting writers and speakers. She developed Alzheimer's disease, but it was not obvious for several years. She had ceased to attend the meetings by the time she was nominated to be the first woman Grand Master of the Science Fiction Writers of America; the nomination was withdrawn at the request of her husband, Thomas Reggie, who said the award and ceremony would be at best confusing and likely upsetting to her, given the progress of her disease. The withdrawal caused dismay among the former SFWA presidents, for she was a great favorite to receive the award. (Former presidents and current officers select a living writer as Grand Master of SF, no more than one annually.) Moore died on April 4, 1987, at her home in Hollywood, California, after a long battle with Alzheimer's.
1981 World Fantasy Award for Life Achievement
1981 Gandalf Grand Master Award
1998 Posthumous induction into the Science Fiction and Fantasy Hall of Fame
2004 Cordwainer Smith Rediscovery Award
https://en.wikipedia.org/wiki?curid=7721
Compactron The Compactron is a type of thermionic valve, or vacuum tube, containing multiple electrode structures packed into a single enclosure. Compactrons were designed to compete with early transistor electronics and were used in televisions, radios, and similar roles. "Compactron" was a trade name applied to multi-electrode-structure tubes specifically constructed on a 12-pin Duodecar base. This vacuum tube family was introduced in 1961 by General Electric in Owensboro, Kentucky, to compete with transistorized electronics during the solid-state transition. Television sets were a primary application. The idea of multi-electrode tubes itself was far from new: the Loewe company of Germany was producing multi-electrode tubes as far back as 1926, and they even included all of the required passive components as well. Use was prevalent in televisions because transistors were slow to achieve the high power and frequency capabilities needed, particularly in color television sets. The first portable color television, the General Electric Porta-Color, was designed using 13 tubes, 10 of which were Compactrons. Even before the Compactron design was unveiled, nearly all tube-based electronic equipment used multi-electrode tubes of one type or another. Virtually every AM/FM radio receiver of the 1950s and 1960s used a 6AK8 (EABC80) tube (or equivalent), designed in 1954 and consisting of three diodes and a triode. The Compactron's integrated design helped lower power consumption and heat generation (Compactrons were to tubes what integrated circuits were to transistors). Compactrons were also used in a few high-end Hi-Fi stereos, and by the Ampeg guitar amplifier company in some of its guitar amps. No modern tube-based Hi-Fi systems are known to use this tube type, as simpler and more readily available tubes have again filled this niche. One related tube, the 7868, is used in some Hi-Fi systems made today. This tube is a Novar tube.
It has the same physical dimensions as the Compactron, but a 9-pin base. The exhaust tip is on the top or bottom of the tube, depending on the manufacturer's preference. It is currently in production by Electro-Harmonix. (Linear Tube Audio's Ultralinear power amplifier uses four 17JN6 Compactron tubes as its power tubes, generating 20 watts with these inexpensive TV tubes.) A distinguishing feature of most Compactrons is the placement of the evacuation tip on the bottom end, rather than the top end as was customary with "miniature" tubes, and a characteristic 3/4"-diameter circle pin pattern. Due to their specific applications in television circuits, many different Compactron types were produced, and almost all were assigned standard US tube numbers. Integrated circuits (of the analogue and digital type) gradually took over all of the functions that the Compactron was designed for. "Hybrid" television sets produced in the early to mid-1970s used a combination of tubes (typically Compactrons), transistors, and integrated circuits in the same set. By the mid-1980s this type of tube was functionally obsolete; Compactrons do not appear in TV sets designed after 1986. Other specialist uses of the tube declined in parallel with television set manufacture, and manufacture of Compactrons ceased in the early 1990s. New old stock replacements for almost all Compactron types produced are easily found for sale on the Internet.
https://en.wikipedia.org/wiki?curid=7722
Carmichael number In number theory, a Carmichael number is a composite number n which satisfies the modular arithmetic congruence relation b^(n−1) ≡ 1 (mod n) for all integers b which are relatively prime to n. They are named for Robert Carmichael. The Carmichael numbers are the subset "K"1 of the Knödel numbers. Equivalently, a Carmichael number is a composite number n for which b^n ≡ b (mod n) for all integers b. Fermat's little theorem states that if "p" is a prime number, then for any integer "b", the number b^p − b is an integer multiple of "p". Carmichael numbers are composite numbers which have this property. Carmichael numbers are also called Fermat pseudoprimes or absolute Fermat pseudoprimes. A Carmichael number will pass a Fermat primality test to every base "b" relatively prime to the number, even though it is not actually prime. This makes tests based on Fermat's little theorem less effective than strong probable prime tests such as the Baillie–PSW primality test and the Miller–Rabin primality test. However, no Carmichael number is either an Euler–Jacobi pseudoprime or a strong pseudoprime to every base relatively prime to it.
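The equivalent definition above, that a composite n satisfies b^n ≡ b (mod n) for every integer b, translates directly into a brute-force check. The sketch below (the function names are illustrative, not from any standard library) uses trial-division primality testing, so it is only practical for small n:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check (adequate for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def is_carmichael(n: int) -> bool:
    """True if n is composite yet satisfies b**n == b (mod n) for all b,
    i.e. n passes the Fermat test to every base despite being composite."""
    if n < 3 or is_prime(n):
        return False
    # The congruence depends only on b mod n, so checking
    # residues b = 0 .. n-1 covers all integers b.
    return all(pow(b, n, n) == b for b in range(n))


# The three smallest Carmichael numbers:
print([n for n in range(2, 2000) if is_carmichael(n)])  # [561, 1105, 1729]
```

Because `all` short-circuits, most composites are rejected as soon as one base fails the congruence (usually b = 2), so the scan above is fast despite its naive appearance.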
https://en.wikipedia.org/wiki?curid=7723
Bo Diddley Ellas McDaniel (born Ellas Otha Bates; December 30, 1928 – June 2, 2008), known as Bo Diddley, was an American singer, guitarist, songwriter and music producer who played a key role in the transition from the blues to rock and roll. He influenced many artists, including Buddy Holly, Elvis Presley, the Beatles, the Rolling Stones, the Animals, and the Clash. His use of African rhythms and a signature beat, a simple five-accent hambone rhythm, is a cornerstone of hip hop, rock, and pop music. In recognition of his achievements, he was inducted into the Rock and Roll Hall of Fame in 1987, the Blues Hall of Fame in 2003, and the Rhythm and Blues Music Hall of Fame in 2017. He received a Lifetime Achievement Award from the Rhythm and Blues Foundation and the Grammy Lifetime Achievement Award. Diddley is also recognized for his technical innovations, including his distinctive rectangular guitar, with its unique booming, resonant, shimmering tones. Born in McComb, Mississippi, as Ellas Otha Bates, he was adopted and raised by his mother's cousin, Gussie McDaniel, whose surname he assumed. In 1934, the McDaniel family moved to the South Side of Chicago, where he dropped Otha from his name and became Ellas McDaniel. He was an active member of Chicago's Ebenezer Baptist Church, where he studied the trombone and the violin, becoming so proficient on the violin that the musical director invited him to join the orchestra. He performed until he was 18. However, he was more interested in the pulsating, rhythmic music he heard at a local Pentecostal Church and took up the guitar. Inspired by a performance by John Lee Hooker, he supplemented his income as a carpenter and mechanic by playing on street corners with friends, including Jerome Green (c. 1934–1973), in the Hipsters band, later renamed the Langley Avenue Jive Cats. Green became a near-constant member of McDaniel's backing band, the two often trading joking insults with each other during live shows. 
During the summers of 1943 and 1944, he played at the Maxwell Street market in a band with Earl Hooker. By 1951 he was playing on the street with backing from Roosevelt Jackson on washtub bass and Jody Williams, whom he had taught to play the guitar. Williams later played lead guitar on "Who Do You Love?" (1956). In 1951, he landed a regular spot at the 708 Club, on Chicago's South Side, with a repertoire influenced by Louis Jordan, John Lee Hooker, and Muddy Waters. In late 1954, he teamed up with harmonica player Billy Boy Arnold, drummer Clifton James and bass player Roosevelt Jackson and recorded demos of "I'm a Man" and "Bo Diddley". They re-recorded the songs at Chess Studios, with a backing ensemble comprising Otis Spann (piano), Lester Davenport (harmonica), Frank Kirkland (drums), and Jerome Green (maracas). The record was released in March 1955, and the A-side, "Bo Diddley", became a number one R&B hit. The origin of the stage name Bo Diddley is unclear. McDaniel claimed that his peers gave him the name, which he suspected was an insult. He also said that the name first belonged to a singer his adoptive mother knew. Harmonicist Billy Boy Arnold said that it was a local comedian's name, which Leonard Chess adopted as McDaniel's stage name and the title of his first single. McDaniel also stated that his school classmates in Chicago gave him the nickname, which he started using when sparring and boxing in the neighborhood with The Little Neighborhood Golden Gloves Bunch. A diddley bow is a homemade single-string instrument played mainly by farm workers in the South. It probably has influences from the West African coast. In the American slang term "bo diddly", "bo" is an intensifier and "diddly" is a truncation of "diddly squat", which means "absolutely nothing". On November 20, 1955, Diddley appeared on the popular television program "The Ed Sullivan Show". 
When someone on the show's staff overheard him casually singing "Sixteen Tons" in the dressing room, he was asked to perform the song on the show. Seeing "Bo Diddley" on the cue card, he thought he was to perform both his self-titled hit single and "Sixteen Tons". Sullivan was furious and banned Diddley from his show, reputedly saying that he wouldn't last six months. Chess Records included Diddley's cover of "Sixteen Tons" on the 1960 album "Bo Diddley Is a Gunslinger". Diddley's hit singles continued in the 1950s and 1960s: "Pretty Thing" (1956), "Say Man" (1959), and "You Can't Judge a Book by the Cover" (1962). He also released numerous albums, including "Bo Diddley Is a Gunslinger" and "Have Guitar, Will Travel". These bolstered his self-invented legend. Between 1958 and 1963, Checker Records released eleven full-length Bo Diddley albums. In the 1960s, he broke through as a crossover artist with white audiences (appearing at the Alan Freed concerts, for example), but he rarely aimed his compositions at teenagers. The album title "Surfing with Bo Diddley" derived from his influence on surf guitarists rather than surfing per se. In 1963, Diddley starred in a UK concert tour with the Everly Brothers and Little Richard along with the Rolling Stones (an unknown band at that time). He wrote many songs for himself and also for others. In 1956, he and guitarist Jody Williams co-wrote the pop song "Love Is Strange", a hit for Mickey & Sylvia in 1957. He also wrote "Mama (Can I Go Out)", which was a minor hit for the pioneering rockabilly singer Jo Ann Campbell, who performed the song in the 1959 rock and roll film "Go Johnny Go". After moving from Chicago to Washington, D.C., the basement of his home at 2614 Rhode Island Avenue NE housed his first home recording studio. Diddley's home studio was frequented by several of Washington, D.C.'s musical luminaries and the site where he recorded the commercially released album "Bo Diddley Is a Gunslinger". 
Diddley also produced and recorded Marvin Gaye (his valet), a member of the local Doo Wop group, the Marquees. Diddley co-wrote the Marquees' first single featuring Gaye titled "Wyatt Earp". It was released on Okeh Records, after the Chess brothers turned it down. During this time, Moonglows' founder Harvey Fuqua sang backing vocals on many of Diddley's home recordings. Gaye later joined the Moonglows and followed them to Motown. Diddley included women in his band: Norma-Jean Wofford, also known as The Duchess; Gloria Jolivet; Peggy Jones, also known as Lady Bo, a lead guitarist (rare for a woman at that time); and Cornelia Redmond, also known as Cookie V. Over the decades, Diddley's performing venues ranged from intimate clubs to stadiums. On March 25, 1972, he played with the Grateful Dead at the Academy of Music in New York City. The Grateful Dead released part of this concert as Volume 30 of the band's concert album series, "Dick's Picks". Also in the early 1970s, the soundtrack of the ground-breaking animated film "Fritz the Cat" contained his song "Bo Diddley", in which a crow idly finger-pops to the track. Diddley spent some years in New Mexico, living in Los Lunas from 1971 to 1978, while continuing his musical career. He served for two and a half years as a deputy sheriff in the Valencia County Citizens' Patrol; during that time he purchased and donated three highway-patrol pursuit cars. In the late 1970s, he left Los Lunas and moved to Hawthorne, Florida, where he lived on a large estate in a custom-made log cabin, which he helped to build. For the remainder of his life he divided his time between Albuquerque and Florida, living the last 13 years of his life in Archer, Florida, a small farming town near Gainesville. In 1979, he appeared as an opening act for The Clash on their US tour. In 1983, he starred as a Philadelphia pawn shop owner in the comedy film "Trading Places". 
In 1989, Diddley entered into a licensing agreement with the sportswear brand Nike. The Wieden & Kennedy-produced commercial in the "Bo Knows" campaign teamed Diddley with two-sport star Bo Jackson and became one of the most iconic spots in advertising history. The agreement ended in 1991, but in 1999 a T-shirt bearing Diddley's image and the slogan "You don't know diddley" was purchased in a Gainesville, Florida, sports apparel store. Diddley felt that Nike should not continue to use the slogan or his likeness and fought Nike over the infringement. After lawyers for both parties failed to reach a renewed legal arrangement, Nike allegedly continued marketing the apparel and ignored cease-and-desist orders, and a lawsuit was filed on Diddley's behalf in Manhattan federal court. In "Legends of Guitar" (filmed live in Spain in 1991), Diddley performed with B.B. King, Les Paul, Albert Collins, and George Benson, among others. He joined the Rolling Stones on their 1994 concert broadcast of "Voodoo Lounge", performing "Who Do You Love?". In 1996, he released "A Man Amongst Men", his first major-label album (and his final studio album), with guest artists including Keith Richards, Ron Wood and the Shirelles. The album earned a Grammy Award nomination in 1997 in the Best Contemporary Blues Album category. Diddley performed a number of shows around the country in 2005 and 2006 with fellow Rock and Roll Hall of Famer Johnnie Johnson and his band, consisting of Johnson on keyboards, Richard Hunt on drums and Gus Thornton on bass. In 2006, he participated as the headliner of a grassroots-organized fundraiser concert to benefit the town of Ocean Springs, Mississippi, which had been devastated by Hurricane Katrina. The "Florida Keys for Katrina Relief" had originally been set for October 23, 2005, but Hurricane Wilma barreled through the Florida Keys on October 24, causing flooding and economic mayhem.
In January 2006, the Florida Keys had recovered enough to host the fundraising concert to benefit the harder-hit community of Ocean Springs. When asked about the fundraiser, Diddley stated, "This is the United States of America. We believe in helping one another". The all-star band formed by Charlie Tona, a long-time close personal friend of Bo Diddley, included members of the Soul Providers and famed artists Clarence Clemons of the E Street Band, Joey Covington of Jefferson Airplane, Alfonso Carey of The Village People, and Carl Spagnuolo of Jay & The Techniques. In an interview with Holger Petersen on "Saturday Night Blues" on CBC Radio in the fall of 2006, he commented on racism in the music industry establishment during his early career, which deprived him of royalties from the most successful part of his career. His final guitar performance on a studio album was with the New York Dolls on their 2006 album "One Day It Will Please Us to Remember Even This". He contributed guitar work to the song "Seventeen", which was included as a bonus track on the limited-edition version of the disc. In May 2007, Diddley suffered a stroke the day after a concert in Council Bluffs, Iowa, at which he had delivered an energetic performance to an enthusiastic crowd. A few months later he had a heart attack. While recovering, Diddley came back to his hometown of McComb, Mississippi, in early November 2007 for the unveiling of a plaque devoted to him on the Mississippi Blues Trail. The plaque marked his achievements and noted that he was "acclaimed as a founder of rock-and-roll." He was not scheduled to perform, but as he listened to the music of local musician Jesse Robinson, who sang a song written for the occasion, Robinson sensed that Diddley wanted to perform and handed him a microphone. It was the only time that he performed publicly after his stroke.
On June 25, 2019, "The New York Times Magazine" listed Bo Diddley among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire. Bo Diddley was married four times. His first marriage, at 18, to Louise Woolingham, lasted a year. Diddley married his second wife, Ethel Mae Smith, in 1949; they had two children. He met his third wife, Kay Reynolds, when she was 15, while performing in Birmingham, Alabama. They soon moved in together and married, despite taboos against interracial marriage. They had two daughters. He married his fourth wife, Sylvia Paiz, in 1992; they were divorced at the time of his death. On May 13, 2007, Diddley was admitted to intensive care at Creighton University Medical Center in Omaha, Nebraska, following a stroke after a concert the previous day in Council Bluffs, Iowa. Starting the show, he had complained that he did not feel well, referring to smoke from the wildfires that were ravaging south Georgia and blowing south to the area near his home in Archer, Florida. The next day, as he was heading back home, he seemed dazed and confused at the airport; 911 and airport security were called, and he was immediately taken by ambulance to Creighton University Medical Center, where he stayed for several days. After tests, it was confirmed that he had suffered a stroke. Diddley had a history of hypertension and diabetes, and the stroke affected the left side of his brain, causing receptive and expressive aphasia (speech impairment). The stroke was followed by a heart attack, which he suffered in Gainesville, Florida, on August 28, 2007. Bo Diddley died on June 2, 2008, of heart failure at his home in Archer, Florida. Garry Mitchell, his grandson and one of more than thirty-five family members at the musician's home when he died at 1:45 a.m. EDT, said his death was not unexpected.
"There was a gospel song that was sung (at his bedside) and (when it was done) he said 'wow' with a thumbs up," Mitchell told Reuters, when asked to describe the scene at the deathbed. "The song was 'Walk Around Heaven' and in his last words he said 'I'm going to heaven.'" He was survived by his children, Evelyn Kelly, Ellas A. McDaniel, Pamela Jacobs, Steven Jones, Terri Lynn McDaniel-Hines, and Tammi D. McDaniel; a brother, the Rev. Kenneth Haynes; and eighteen grandchildren, fifteen great-grandchildren and three great-great-grandchildren. His funeral, a four-hour "homegoing" service, took place on June 7, 2008, at Showers of Blessings Church in Gainesville, Florida, and kept in tune with the vibrant spirit of Bo Diddley's life and career. The many in attendance chanted "Hey Bo Diddley" as the Archer Church of GOD in Christ gospel band played music with a nod to the American music icon. A number of notable musicians sent flowers, including Little Richard, George Thorogood, Tom Petty and Jerry Lee Lewis. Little Richard, who had been asking his audiences to pray for Bo Diddley throughout his illness, had to fulfill concert commitments in Westbury and New York City the weekend of the funeral. He took time at both concerts to remember his friend of a half-century, performing Bo's namesake tune in his honor. After the funeral service, a tribute concert was held at the Martin Luther King Center in Gainesville, Florida, and featured guest performances by his son and daughter, Ellas A. McDaniel and Evelyn "Tan" Cooper; long-time background vocalist and original Boette, Gloria Jolivet; long-time friend, co-producer, and former Bo Diddley & Offspring guitarist Scott "Skyntyte" Free; and Eric Burdon. In the days following his death, tributes were paid by then-President George W. Bush, the United States House of Representatives, and many musicians and performers, including B. B. 
King, Ronnie Hawkins, Mick Jagger, Ronnie Wood, George Thorogood, Eric Clapton, Tom Petty, Robert Plant, Elvis Costello, Bonnie Raitt, Robert Randolph and the Family Band and Eric Burdon. Burdon used video footage of the McDaniel family and friends in mourning for a video promoting his ABKCO Records release "Bo Diddley Special". In November 2009, the guitar used by Bo Diddley in his final stage performance sold for $60,000 at auction. All twenty-two beneficiaries of his estate sought a forensic accounting of the estate but were denied without explanation; its current value is unknown to the heirs. Bo Diddley was posthumously awarded a Doctor of Fine Arts degree by the University of Florida for his influence on American popular music. In its "People in America" radio series, about influential people in American history, the Voice of America radio service paid tribute to him, describing how "his influence was so widespread that it is hard to imagine what rock and roll would have sounded like without him." Mick Jagger stated that "he was a wonderful, original musician who was an enormous force in music and was a big influence on the Rolling Stones. He was very generous to us in our early years and we learned a lot from him". Jagger also praised the late star as a one-of-a-kind musician, adding, "We will never see his like again". The documentary film "" by director Phil Ranstrom features Bo Diddley's last on-camera interview. He achieved numerous accolades in recognition of his significant role as one of the founding fathers of rock and roll. In 2003, U.S. Representative John Conyers paid tribute to Bo Diddley in the United States House of Representatives, describing him as "one of the true pioneers of rock and roll, who has influenced generations".
In 2004, Mickey and Sylvia's 1956 recording of "Love Is Strange" (a song first recorded by Bo Diddley but not released until a year before his death) was inducted into the Grammy Hall of Fame as a recording of qualitative or historical significance. Also in 2004, Bo Diddley was inducted into the Blues Foundation's Blues Hall of Fame and was ranked number 20 on "Rolling Stone" magazine's list of the 100 Greatest Artists of All Time. In 2005, Bo Diddley celebrated his 50th anniversary in music with successful tours of Australia and Europe and with coast-to-coast shows across North America. He performed his song "Bo Diddley" with Eric Clapton and Robbie Robertson at the Rock and Roll Hall of Fame's 20th annual induction ceremony. In the UK, "Uncut" magazine included his 1957 debut album, "Bo Diddley", in its listing of the '100 Music, Movie & TV Moments That Have Changed the World'. Bo Diddley was honored by the Mississippi Blues Commission with a Mississippi Blues Trail historic marker placed in McComb, his birthplace, in recognition of his enormous contribution to the development of the blues in Mississippi. On June 5, 2009, the city of Gainesville, Florida, officially renamed and dedicated its downtown plaza the Bo Diddley Community Plaza. The plaza was the site of a benefit concert at which Bo Diddley performed to raise awareness about the plight of the homeless in Alachua County and to raise money for local charities, including the Red Cross. The "Bo Diddley beat" is essentially the clave rhythm, one of the most common bell patterns found in sub-Saharan African music traditions. One scholar found this rhythm in 13 rhythm and blues recordings made in the years 1944–55, including two by Johnny Otis from 1948. Bo Diddley gave different accounts of how he began to use this rhythm. Sublette asserts, "In the context of the time, and especially those maracas [heard on the record], 'Bo Diddley' has to be understood as a Latin-tinged record. 
A rejected cut recorded at the same session was titled only 'Rhumba' on the track sheets." The Bo Diddley beat is similar to "hambone", a style used by street performers who play out the beat by slapping and patting their arms, legs, chest, and cheeks while chanting rhymes. The beat somewhat resembles the "shave and a haircut, two bits" rhythm; Diddley came across it while trying to play Gene Autry's "(I've Got Spurs That) Jingle, Jangle, Jingle". Three years before his "Bo Diddley", a song with similar syncopation, "Hambone", was cut by the Red Saunders Orchestra with the Hambone Kids. In 1944, "Rum and Coca Cola", containing the Bo Diddley beat, was recorded by the Andrews Sisters. Buddy Holly's "Not Fade Away" (1957) and Them's "Mystic Eyes" (1965) used the beat. Many songs (for example, "Hey Bo Diddley" and "Who Do You Love?") have no chord changes; that is, the musicians play the same chord throughout the piece, so that the rhythms create the excitement rather than having the excitement generated by harmonic tension and release. In his other recordings, Bo Diddley used various rhythms, from straight back beat to pop ballad style to doo-wop, frequently with maracas by Jerome Green. An influential guitar player, Bo Diddley developed many special effects and other innovations in tone and attack, particularly the resonant "shimmering" sound. His trademark instrument was his self-designed, one-of-a-kind, rectangular-bodied "Twang Machine" (referred to as "cigar-box shaped" by music promoter Dick Clark), built by Gretsch. He had other uniquely shaped guitars custom-made for him by other manufacturers over the years, most notably the "Cadillac" and the rectangular "Turbo 5-speed" (with built-in envelope filter, flanger and delay) designs, made by Tom Holmes (who also made guitars for ZZ Top's Billy Gibbons, among others). In a 2005 interview on JJJ radio in Australia, he implied that the rectangular design sprang from an embarrassing moment.
During an early gig, while jumping around on stage with a Gibson L5 guitar, he landed awkwardly, hurting his groin. He then went about designing a smaller, less-restrictive guitar that allowed him to keep jumping around on stage while still playing his guitar. He also played the violin, which is featured on his mournful instrumental "The Clock Strikes Twelve", a twelve-bar blues. He often created lyrics as witty and humorous adaptations of folk music themes. The song "Bo Diddley" was based on the African-American clapping rhyme "Hambone" (which in turn was based on the lullaby "Hush Little Baby"). Likewise, "Hey Bo Diddley" is based on the song "Old MacDonald". The song "Who Do You Love?" with its rap-style boasting, and his use of the African-American game known as "the dozens" on the songs "Say Man" and "Say Man, Back Again," are cited as progenitors of hip-hop music (for example, "You got the nerve to call somebody ugly. Why, you so ugly, the stork that brought you into the world ought to be arrested").
https://en.wikipedia.org/wiki?curid=5033
Bela Lugosi Béla Ferenc Dezső Blaskó (; 20 October 1882 – 16 August 1956), known professionally as Bela Lugosi (; ), was a Hungarian-American actor best remembered for portraying Count Dracula in the 1931 film and for his roles in other horror films. After playing small parts on the stage in his native Hungary, Lugosi gained his first role in a film in 1917. He had to leave the country after the failed Hungarian Communist Revolution of 1919 because of his socialist activism. He acted in several films in Weimar Germany before arriving in the United States as a seaman on a merchant ship. In 1927, he appeared as Count Dracula in a Broadway adaptation of Bram Stoker's novel. He later appeared in the 1931 film "Dracula" directed by Tod Browning and produced by Universal Pictures. Through the 1930s, he occupied an important niche in horror films, with their East European setting, but his Hungarian accent limited his potential casting, and he unsuccessfully tried to avoid typecasting. Meanwhile, he was often paired with Boris Karloff, who was able to demand top billing. To his frustration, Lugosi, a charter member of the American Screen Actors Guild, was increasingly restricted to minor parts, kept employed by the studio principally so that they could put his name on the posters. Among his pairings with Karloff, he performed major roles only in "The Black Cat" (1934), "The Raven" (1935), and "Son of Frankenstein" (1939); even in "The Raven", Karloff received top billing despite Lugosi performing the lead role. By this time, Lugosi had been receiving regular medication for sciatic neuritis, and he became addicted to morphine and methadone. This drug dependence was known to producers, and the offers eventually dwindled to a few parts in Ed Wood's low-budget films—including a brief appearance in "Plan 9 from Outer Space" (1959). Lugosi was married five times and had one son, Bela George. 
Lugosi, the youngest of four children, was born Béla Ferenc Dezső Blaskó in Lugoj, Timiș County, in the Banat region of what is now western Romania, to Hungarian father István Blaskó, a banker, and Serbian-born mother Paula de Vojnich. He later based his last name on his hometown. He and his sister Vilma were raised in a Roman Catholic family. At the age of 12, Lugosi dropped out of school. He began his acting career in 1901 or 1902. His earliest known performances are from provincial theatres in the 1903–04 season, playing small roles in several plays and operettas. He went on to perform in Shakespeare's plays. After moving to Budapest in 1911, he played dozens of roles with the National Theatre of Hungary between 1913 and 1919. Although Lugosi would later claim that he "became the leading actor of Hungary's Royal National Theatre", almost all his roles there were small or supporting parts. During World War I, he served as an infantryman in the Austro-Hungarian Army from 1914 to 1916, rising to the rank of lieutenant. He was awarded the Wound Medal for wounds he suffered while serving on the Russian front. Due to his activism in the actors' union in Hungary during the revolution of 1919, he was forced to flee his homeland. He went first to Vienna before settling in Berlin (in the Langestrasse), where he continued acting. He had taken the name "Lugosi" in 1903 to honor his birthplace, and he eventually travelled to New Orleans, Louisiana, as a crewman aboard a merchant ship. Lugosi's first film appearance was in the movie "Az ezredes" ("The Colonel", 1917). When appearing in Hungarian silent films, he used the stage name Arisztid Olt. Lugosi made 12 films in Hungary between 1917 and 1918 before leaving for Germany. Following the collapse of Béla Kun's Hungarian Soviet Republic in 1919, leftists and trade unionists became vulnerable. Lugosi was proscribed from acting due to his participation in the formation of an actors' union.
Exiled in Weimar-era Germany, he appeared in a small number of well-received films, among them adaptations of the Karl May novels "On the Brink of Paradise" ("Auf den Trümmern des Paradieses", 1920) and "Caravan of Death" ("Die Todeskarawane", also 1920) with Dora Gerson (Gerson, who was Jewish, died in Auschwitz). Lugosi left Germany in October 1920, intending to emigrate to the United States, and entered the country at New Orleans in December 1920. He made his way to New York and was inspected by immigration officers at Ellis Island in March 1921. He declared his intention to become a US citizen in 1928; on June 26, 1931, he was naturalized. On his arrival in America, Lugosi worked for some time as a laborer, then entered the theater in New York City's Hungarian immigrant colony. With fellow expatriate Hungarian actors he formed a small stock company that toured Eastern cities, playing for immigrant audiences. Lugosi acted in several Hungarian plays before making his English-language Broadway debut in "The Red Poppy" (1922). Three more parts came in 1925–26, including a five-month run in the comedy-fantasy "The Devil in the Cheese". In 1925, he appeared as an Arab sheik in "Arabesque", which premiered in Buffalo, New York, at the Teck Theatre before moving to Broadway. His first American film role was in the melodrama "The Silent Command" (1923). Several more silent roles followed, villains and continental types, all in productions made in the New York area. In the summer of 1927, Lugosi was approached to star in a Broadway theatre production of "Dracula", which had been adapted by Hamilton Deane and John L. Balderston from Bram Stoker's 1897 novel. The Horace Liveright production was successful, running for 261 performances before touring the United States to much fanfare and critical acclaim throughout 1928 and 1929. In 1928, Lugosi decided to stay in California when the play ended its West Coast run.
His performance had piqued the interest of Fox Film, and he was cast in the studio's silent film "The Veiled Woman" (1929). He also appeared in "Prisoners" (also 1929), believed lost, which was released in both silent and talkie versions. In 1929, with no other film roles in sight, he returned to the stage as Dracula for a short West Coast tour of the play. Lugosi remained in California, where he resumed his film work under contract with Fox, appearing in early talkies, often as a heavy or an "exotic sheik". He also continued to lobby for his prized role in the film version of "Dracula". Despite his critically acclaimed performance on stage, Lugosi was not Universal Pictures' first choice for the role of Dracula when the company optioned the rights to the Deane play and began production in 1930. Several prominent actors were considered before Browning cast Lugosi, and the film was a hit. Through his association with "Dracula" (in which he appeared with minimal makeup, using his natural, heavily accented voice), Lugosi found himself typecast as a horror villain in films such as "Murders in the Rue Morgue" (1932), "The Raven" (1935), and "Son of Frankenstein" (1939) for Universal, and the independent "White Zombie" (1932). His accent, while part of his image, limited the roles he could play. Lugosi did attempt to break type by auditioning for other roles: he lost out to Lionel Barrymore for the role of Grigori Rasputin in "Rasputin and the Empress" (also 1932); to C. Henry Gordon for Surat Khan in "The Charge of the Light Brigade" (1936); and to Basil Rathbone for Commissar Dimitri Gorotchenko in "Tovarich" (1937), a role Lugosi had played on stage. He played the elegant, somewhat hot-tempered General Nicholas Strenovsky-Petronovich in "International House" (1933).
Five films at Universal — "The Black Cat" (1934), "The Raven" (1935), "The Invisible Ray" (1936), "Son of Frankenstein" (1939), and "Black Friday" (1940) — paired Lugosi with Boris Karloff, as did a minor cameo in "Gift of Gab" (1934) and two films at RKO Pictures, "You'll Find Out" (1940) and "The Body Snatcher" (1945). Despite the relative size of their roles, Lugosi inevitably received second billing, below Karloff. There are contradictory reports of Lugosi's attitude toward Karloff: some claim that he was openly resentful of Karloff's long-term success and ability to gain good roles beyond the horror arena, while others suggest the two actors were, for a time at least, good friends. Karloff himself suggested in interviews that Lugosi was initially mistrustful of him when they acted together, believing that the Englishman would attempt to upstage him. When this proved not to be the case, according to Karloff, Lugosi settled down and they worked together amicably (though some have commented that Karloff's on-set insistence on breaking from filming for mid-afternoon tea annoyed Lugosi). Karloff also insinuated that his rival could not act, claiming Lugosi had "never learned his trade". A few critics cited his "dull and slow" performance in "Dracula" as evidence that the role demanded minimal dialogue and little real acting prowess. Lugosi did get a few heroic leads, as in Universal's "The Black Cat" (after Karloff had been accorded the more colorful role of the villain) and "The Invisible Ray", and a romantic role in producer Sol Lesser's adventure serial "The Return of Chandu" (1934), but his typecasting problem appears to have been too entrenched to be alleviated by those films.
Lugosi addressed his plea to be cast in non-horror roles directly to casting directors through his listing in the 1937 "Players Directory", published by the Academy of Motion Picture Arts and Sciences, in which he (or his agent) calls the idea that he is only fit for horror films "an error." A number of factors began to work against Lugosi's career in the mid-1930s. Universal changed management in 1936 and, because of a British ban on horror films, dropped them from its production schedule; Lugosi found himself consigned to Universal's non-horror B-film unit, at times in small roles where he was obviously used for "name value" only. Throughout the 1930s, Lugosi, experiencing a severe career decline despite his popularity with audiences (Universal executives always preferred his rival Karloff), accepted many leading roles from independent producers like Nat Levine, Sol Lesser, and Sam Katzman. These low-budget thrillers indicate that Lugosi was less discriminating than Karloff in selecting screen vehicles, but the exposure helped Lugosi financially, if not artistically. Lugosi tried to keep busy with stage work, but had to borrow money from the Actors Fund of America to pay hospital bills when his only child, Bela George Lugosi, was born in 1938. Historian John McElwee reports, in his 2013 book "Showmen, Sell It Hot!", that Bela Lugosi's popularity received a much-needed boost in August 1938, when California theater owner Emil Umann revived "Dracula" and "Frankenstein" as a special double feature. The combination was so successful that Umann scheduled extra shows to accommodate the capacity crowds, and invited Lugosi to appear in person, which thrilled new audiences that had never seen Lugosi's classic performance. "I owe it all to that little man at the Regina Theatre," Lugosi said of exhibitor Umann. "I was dead, and he brought me back to life."
Universal took notice of the tremendous business, launched its own national re-release of the same two horror favorites, and rehired Lugosi to star in new films. The first was Universal's "Son of Frankenstein" (1939), in which he played the character role of Ygor, a mad blacksmith with a broken neck, in heavy makeup and beard. The same year saw Lugosi make a rare appearance in an A-list motion picture: he was a stern Soviet commissar in Metro-Goldwyn-Mayer's romantic comedy "Ninotchka", starring Greta Garbo and directed by Ernst Lubitsch. Lugosi was effective in this small but prestigious character role, and it could have been a turning point for the actor, but within the year he was back on Hollywood's Poverty Row, playing leads for Sam Katzman. These horror, comedy and mystery B-films were released by Monogram Pictures. At Universal, he often received star billing for what amounted to a supporting part. "The Gorilla" (1939) had him playing straight man to Patsy Kelly and the Ritz Brothers. Ostensibly because of injuries received during military service, Lugosi developed severe, chronic sciatica. Though at first he was treated with pain remedies such as asparagus juice, doctors escalated the medication to opiates. The growth of his dependence on analgesic drugs, particularly morphine and, after 1947 when it became available in America, methadone, was directly proportional to the dwindling of screen offers. He was finally cast as Frankenstein's monster in Universal's "Frankenstein Meets the Wolf Man" (1943), but Lugosi had no dialogue; his voice had been dubbed over that of Lon Chaney Jr., from line readings at the end of "The Ghost of Frankenstein" (1942). Lugosi played Dracula for a second and last time on film in "Abbott and Costello Meet Frankenstein" (1948), which was also his last "A" movie. For the remainder of his life he appeared, less and less frequently, in obscure, low-budget features.
From 1947 to 1950, he performed in summer stock, often in productions of "Dracula" or "Arsenic and Old Lace", and during the rest of the year made personal appearances in a touring "spook show" and on early commercial television. In September 1949, Milton Berle invited Lugosi to appear in a sketch on "Texaco Star Theatre". Lugosi memorized the script for the skit, but became confused on the air when Berle began to ad lib. His only television dramatic role was on the anthology series "Suspense" on October 11, 1949, in an adaptation of Edgar Allan Poe's "The Cask of Amontillado". In 1951, while in England to play a six-month tour of "Dracula", Lugosi co-starred in a lowbrow film comedy, "Mother Riley Meets the Vampire" (also known as "Vampire over London" and "My Son, the Vampire"), released the following year. Following his return to the United States, he was interviewed for television, and reflected wistfully on his typecasting in horror parts: "Now I am the boogie man". In the same interview he expressed a desire to play more comedy, as he had in the "Mother Riley" farce. Independent producer Jack Broder took Lugosi at his word, casting him in a jungle-themed comedy, "Bela Lugosi Meets a Brooklyn Gorilla" (1952), co-starring nightclub comedians Duke Mitchell and Sammy Petrillo, whose act closely resembled that of Dean Martin and Jerry Lewis. Lugosi maintained a lively stage career, with plenty of personal appearances; as film offers declined, he became more and more dependent on live venues to support his family. He took over the role of Jonathan Brewster from Boris Karloff in "Arsenic and Old Lace", and he expressed interest in playing Elwood P. Dowd in "Harvey" to help himself professionally. He also made many personal appearances to promote his horror image and any accompanying film.
Late in his life, Bela Lugosi again received star billing in films when the ambitious but financially limited filmmaker Ed Wood, a fan of Lugosi, found him living in obscurity and near-poverty and offered him roles in his films, such as an anonymous narrator in "Glen or Glenda" (1953) and a Dr. Frankenstein-like mad scientist in "Bride of the Monster" (1955). During post-production of the latter, Lugosi decided to seek treatment for his drug addiction, and the premiere of the film was said to be intended to help pay for his hospital expenses. According to Kitty Kelley's biography of Frank Sinatra, when the entertainer heard of Lugosi's problems, he helped with expenses and visited Lugosi at the hospital. Sinatra would recall Lugosi's amazement at his visit, since the two men had never met before. During an impromptu interview upon his exit from the treatment center in 1955, Lugosi stated that he was about to go to work on a new Ed Wood film, "The Ghoul Goes West". This was one of several projects proposed by Wood, including "The Phantom Ghoul" and "Dr. Acula". With Lugosi in his Dracula cape, Wood shot impromptu test footage, with no storyline in mind, in front of Tor Johnson's home, in a suburban graveyard, and in front of Lugosi's apartment building on Carlton Way. This footage ended up in "Plan 9 from Outer Space" (1959), which was mostly filmed after Lugosi died. Wood hired Tom Mason, his wife's chiropractor, to double for Lugosi in additional shots. Mason was noticeably taller and thinner than Lugosi, and had the lower half of his face covered with his cape in every shot, as Lugosi sometimes did in "Abbott and Costello Meet Frankenstein". Following his treatment, Lugosi made one final film, in late 1955, "The Black Sleep", for Bel-Air Pictures, which was released in the summer of 1956 through United Artists with a promotional campaign that included several personal appearances.
To Lugosi's disappointment, however, his role in this film was that of a mute, with no dialogue. In 1917, Lugosi married Ilona Szmik (1898–1991). The couple divorced in 1920, reputedly over political differences with her parents. In 1921, he married Ilona von Montagh; they divorced in 1924. In 1929, Lugosi took his place in Hollywood society and scandal when he married wealthy San Francisco resident Beatrice Weeks (1897–1931), widow of architect Charles Peter Weeks. She filed for divorce four months later, citing actress Clara Bow as the "other woman". In 1933, he married 22-year-old Lillian Arch (1911–1981), the daughter of Hungarian immigrants; they had a child, Bela G. Lugosi, in 1938. Lugosi had four grandchildren and six great-grandchildren. Lillian and Bela, as well as his mother, vacationed on their lake property in Lake Elsinore, California (then called Elsinore), on two lots, between 1944 and 1953. Bela Lugosi Jr. attended the Elsinore Naval & Military School in Lake Elsinore. Lillian and Béla divorced in 1953, at least partially because of Béla's jealousy over Lillian taking a full-time job as an assistant to Brian Donlevy on the sets and studios for Donlevy's radio and television series "Dangerous Assignment"; Lillian eventually married Donlevy in 1966. Lugosi married Hope Lininger, his fifth wife, in 1955; they remained married until his death. She had been a fan, writing letters to him when he was in the hospital recovering from his addiction to Demerol, and signing her letters "A dash of Hope". She died in 1997 at age 78. Lugosi died of a heart attack on 16 August 1956, while lying on a bed in his Los Angeles apartment. He was 73. The rumor that Lugosi was clutching the script for "The Final Curtain", a planned Ed Wood project, at the time of his death is not true. Lugosi was buried wearing one of the "Dracula" cape costumes in Holy Cross Cemetery in Culver City, California.
Contrary to popular belief, Lugosi never requested to be buried in his cloak; Bela G. Lugosi confirmed on numerous occasions that he and his mother, Lillian, actually made the decision, but believed it was what his father would have wanted. In 1979, the "Lugosi v. Universal Pictures" decision by the California Supreme Court held that Lugosi's personality rights could not pass to his heirs, as a copyright would have. The court ruled that under California law any rights of publicity, including the right to his image, terminated with Lugosi's death. In Tim Burton's "Ed Wood", Lugosi is portrayed by Martin Landau, who received the 1994 Academy Award for Best Supporting Actor for the performance. According to Bela G. Lugosi (his son), Forrest Ackerman, Dolores Fuller and Richard Sheffield, the film's portrayal of Lugosi is inaccurate: in real life, he never used profanity, owned small dogs, or slept in coffins, and contrary to the film, he did not struggle performing on "The Red Skelton Show". Three Lugosi projects were featured on the television show "Mystery Science Theater 3000": the 1942 film "The Corpse Vanishes" appeared in episode 105; the serial "The Phantom Creeps" appeared throughout season two; and the Ed Wood production "Bride of the Monster" appeared in episode 423. An episode of "Sledge Hammer!" titled "Last of the Red Hot Vampires" was an homage to Bela Lugosi; at the end of the episode, it was dedicated to "Mr. Blasko". In 2001, BBC Radio 4 broadcast "There Are Such Things" by Steven McNicoll and Mark McDonnell. Focusing on Lugosi and his well-documented struggle to escape the role that had typecast him, the play went on to receive the Hamilton Deane Award for best dramatic presentation from the Dracula Society in 2002. A bust of Lugosi was erected on one of the corners of Vajdahunyad Castle in Budapest.
The Ellis Island Immigration Museum in New York City features a live 30-minute play that focuses on Lugosi's illegal entry into the country and his subsequent arrival at Ellis Island to enter legally. The cape Lugosi wore in "Dracula" (1931) was in the possession of his family until it was put up for auction in 2011. It was expected to sell for up to $2 million, but did not sell and was listed again by Bonhams in 2018. In 2019, the Academy Museum of Motion Pictures announced the acquisition of the cape, via partial donation from the Lugosi family, and said it would go on display in 2020. The theatrical play "Lugosi - the Shadow of the Vampire" () is based on Lugosi's life, telling the story of how he became typecast as Dracula and how his drug addiction worsened. He was played by one of Hungary's most renowned actors, Ivan Darvas. Andy Warhol's 1963 silkscreen "The Kiss" depicts Lugosi from "Dracula" about to bite into the neck of co-star Helen Chandler, who played Mina Harker; a copy sold for $798,000 at Christie's in May 2000. Lugosi was also the subject of "Bela Lugosi's Dead", the first single by the English band Bauhaus. Released in August 1979, it is often considered the first gothic rock record. Lugosi's star on the Hollywood Walk of Fame is mentioned in "Celluloid Heroes", a song performed by The Kinks and written by their lead vocalist and principal songwriter, Ray Davies; it appeared on their 1972 album "Everybody's in Show-Biz". Lugosi was a fan of the occult, and his reputedly haunted mirror is on display at Zak Bagans' haunted museum in Las Vegas, Nevada. According to Paru Itagaki, the creator of the Japanese manga/anime "Beastars", the main character Legosi was inspired by Bela Lugosi, sharing a similar name and imposing physical figure.
Bride of the Monster Bride of the Monster is a 1955 American science fiction horror film directed, written and produced by Edward D. Wood Jr., starring Bela Lugosi and Tor Johnson, with a supporting cast featuring Tony McCoy and Loretta King. The film is considered to have had Wood's biggest budget ($70,000). Production commenced in 1953 but, due to financial problems, was not completed until 1955. It was released in May 1955, initially on a double bill with "Macumba". A sequel, entitled "Night of the Ghouls", was finished in 1959 but, due to last-minute financial problems, was not released until 1984. In a stretch of woods, two hunters are caught in a "raging thunderstorm". They decide to seek refuge in Willows House, which is supposedly abandoned and haunted. When they reach Willows House, they find it occupied, and the current owner repeatedly denies them hospitality. One of the hunters attempts to force his way into the house, but a giant octopus is released from its tank and sent after the intruders. One of the fleeing hunters is killed by the octopus, while the other is captured by a giant of a man. The owner is a scientist, Dr. Eric Vornoff, and the giant is his mute assistant, Lobo. Vornoff explains that he will perform an experiment on the unwilling hunter, who dies on the operating table. At a police station, Officer Tom Robbins meets with Lieutenant Dick Craig. There are now 12 missing victims, and the police still do not know what happened to them. The reporter behind the newspaper reports is Janet Lawton, Craig's fiancée. Janet forces her way into the office, argues with Robbins, and vows to go to Lake Marsh to investigate. At the police station, Robbins and Craig have a meeting with an intellectual from Europe, Professor Vladimir Strowski, who agrees to assist the police in investigating the Marsh, but not at night.
As night falls and another storm begins, Janet drives alone to Lake Marsh, but visibility is poor and she drives off the road and into a ravine. Lobo rescues her. Janet awakens to find herself a prisoner of Vornoff, who uses hypnosis to put her back to sleep. The following day, Craig and his partner drive to the area around Lake Marsh, a swamp. The partners discuss the strange weather and mention that the newspapers could be right about "the atom bomb explosions distorting the atmosphere". The duo eventually discover Janet's abandoned car and realize she is the 13th missing victim. They leave the swamp just as Strowski drives a rented car into it. Janet awakens at Willows House. Vornoff assures her that Lobo is harmless, but the giant seems fascinated with the female captive and approaches her with questionable intent. Vornoff explains that the giant is human and that he found him in the "wilderness of Tibet". Vornoff then hypnotically puts Janet back to sleep and orders Lobo to carry the captive to his private quarters. Meanwhile, Strowski silently approaches Willows House and enters through the unlocked front door. While Strowski searches the house, Vornoff arrives to greet him. Their country of origin is interested in Vornoff's groundbreaking experiments with atomic energy and wants to recruit him. Vornoff recounts that two decades earlier he had proposed experiments with nuclear power that could create superhumans of great strength and size; in response, he was branded a madman and exiled by his country. Strowski reveals that he has dreams of conquest in the name of their country, while Vornoff dreams of his creations conquering in his own name. By late evening, Craig and his partner return to the swamp and discover Strowski's abandoned car. The partners split up to search the area, Craig heading towards Willows House. Back in the secret laboratory, Vornoff uses a wave of his hand to summon Janet to his current location.
She arrives dressed as a bride, summoned through telepathy. He has decided to use her as the next subject of his experiments. Lobo is reluctant to take part in this experiment, and Vornoff uses a whip to re-assert control over his slave and assistant. Meanwhile, Craig has entered the house and accidentally discovered the secret passage. He is captured by Vornoff and Lobo. As the experiment is about to begin, Lobo is visibly distressed. Making his decision, Lobo rebels and attacks Vornoff. After a fight, Lobo knocks Vornoff out, releases Janet, and moves the unconscious Vornoff to the operating table. The scientist becomes the subject of his own human experiment. This time the experiment works, and Vornoff is transformed into an atomic-powered superhuman. He and Lobo struggle with each other, and their fight destroys the laboratory and starts a fire. Vornoff grabs Janet and escapes the flames. Robbins and other officers arrive to help Craig, and the police pursue Vornoff through the woods. There is another thunderstorm, and a lightning strike further destroys Willows House. With his home and equipment destroyed, a distressed Vornoff abandons Janet and merely attempts to escape. Craig rolls a rock at him and lands him in the water with the octopus. They struggle until a nuclear explosion obliterates both combatants, apparently the end result of the chain reaction started at the destroyed laboratory. Robbins comments that Vornoff "tampered in God's domain". The first incarnation of the film was a 1953 script by Alex Gordon titled "The Atomic Monster", but a lack of financing prevented any production. Later, Ed Wood revived the project as "The Monster of the Marshes". Actual shooting began in October 1954 at the Ted Allan Studios, but further money problems quickly halted the production. The required funds were supplied by a rancher named Donald McCoy, who became the film's producer. He also provided his son to star as the film's hero.
According to screenwriter Dennis Rodriguez, casting the younger McCoy as the protagonist was one of two terms Donald imposed on Wood; the other was to include an atomic explosion in the finale. Production resumed in 1955 at Centaur Studios. The film premiered at Hollywood's Paramount theater in May 1955, under the title "Bride of the Atom". The film was reportedly completed and released through a deal with Samuel Z. Arkoff. Arkoff profited from the film more than Wood, and his earnings contributed to the funding of American International Pictures. The ending credits identify the copyright holder of the film as "Filmakers Releasing Organization". Distribution rights were held by Banner Films in the United States and by Exclusive in the United Kingdom. The film combines elements of science fiction and horror fiction, genres which were frequently combined in films of the 1950s. Like many of these contemporaries, "Bride" serves in part as a Cold War propaganda film. Once again, an external threat from "Old Europe" serves as the enemy of the righteous United States. In Cold War thrillers, foreign nations served as a vilified and demonized Other for American audiences. The country of origin of Vornoff and Strowski is left unnamed; the only clues are that it is European and has its own dreams of conquest. By implication, the country which exiled Vornoff in the 1930s could be Nazi Germany or the Soviet Union. Their role as villains in American cinema had already been solidified by the 1950s, and Wood could be alluding to either or both of them. Strowski uses the term "master race", a key concept in Nazism. Both the working title "Bride of the Atom" and the final title "Bride of the Monster" allude to the earlier film "Bride of Frankenstein" (1935). The film otherwise follows the template of the Poverty Row horror films of the 1940s.
The Atomic Age influences the film in its ominous implications concerning nuclear weapons and the threat they posed to human civilization. Rob Craig makes an argument for including the film in a subgenre of Cold War-themed thrillers along with "Kiss Me Deadly" (1955), "The World, the Flesh and the Devil" (1959), "On the Beach" (1959), "The Manchurian Candidate" (1962), "Dr. Strangelove" (1964), "Seven Days in May" (1964) and "Fail-Safe" (1964). This was Bela Lugosi's last speaking role in a feature film; he subsequently played a silent part in "The Black Sleep" (1956). "Plan 9 from Outer Space" (1959) uses silent archive footage of Lugosi, but he died prior to the creation of its script; the footage was from an unfinished Ed Wood film that was to be called "The Vampire's Tomb". "Lock Up Your Daughters" (1959) recycled footage from Lugosi's earlier films, possibly mixed with some new material. According to Rob Craig, in "Bride" Lugosi plays for the last time "a charismatic villain whose megalomania leads to downfall and destruction". Craig considers this one of Lugosi's finest roles, citing the surprisingly energetic performance of the aging actor. The scenes involving hypnosis contain close-ups of Lugosi's eyes; Wood was probably trying to recreate similar scenes from an older Lugosi film, "White Zombie" (1932). Lugosi did not actually play Vornoff in the scenes demanding physicality: the film made use of body doubles for him, Eddie Parker and Red Reagan. Parker had also doubled for Lugosi in "Frankenstein Meets the Wolf Man" (1943). Lugosi's fee for the film is estimated to have been $1,000. The story is similar to an earlier Bela Lugosi movie, "The Corpse Vanishes"; in both films, each bride at her wedding was given an orchid, which she sniffed before passing out.
In "The Corpse Vanishes", Lugosi played a doctor who captured the brides, took some kind of fluid from each bride's body, and injected it into his wife to make her temporarily young again. Characters included his wife, an old woman, the old woman's grown son, and a dwarf. In "Bride of the Monster", Lugosi again plays a doctor conducting experiments, but his only housemate/assistant is Lobo, and when his experiment fails to turn someone into an "atomic-powered superman", he throws the dead subject to an octopus or an alligator, much as Lugosi throws a body into a river in "Murders in the Rue Morgue". The hunters of the opening scenes are unnamed in those scenes, but are identified later in the film as Jake Long and Blake "Mac" McCreigh. According to the credits, Jake was played by John Warren and Mac by Bud Osborne. The police station scenes feature cameos by a drunk and a newspaper seller: the former is played by Ben Frommer (known for playing Count Bloodcount in "Transylvania 6-5000"), the latter by William Benedict (known as one of The Bowery Boys). Janet Lawton briefly speaks with a co-worker called Margie, played by Dolores Fuller. Dick Craig's partner, Martin, is played by Don Nagel. Both Fuller and Nagel had worked with Wood on "Jail Bait" (1954). The film uses both stock footage of a real octopus and a fake rubber octopus prop in scenes where "the monster" interacts with the actors. It is widely believed the prop came from the John Wayne film "Wake of the Red Witch" (1948); contradictory accounts claim that Wood either stole or legally rented it from Republic Pictures, which produced the earlier film. The struggle between Vornoff and the octopus was filmed at Griffith Park. Craig comments that there is a stark contrast between the characters of Dick Craig and Janet Lawton: Dick speaks in a deadpan, unemotional way and seems a rather lethargic character.
Janet is a "brassy girl reporter", a dynamic character with a sense of autonomy. The role was reportedly intended for Dolores Fuller. According to Fuller's recollections, Loretta King bribed Wood into casting her as Janet with promises of securing further funding for the film, and Fuller was thus reduced to a cameo role; King vehemently denied bribing Wood in any way, so the story lacks confirmation. In a subplot of the film, there are storms every night for three months and strange weather patterns. The characters attribute the phenomenon to the effects of nuclear explosions on the atmosphere. This probably reflects actual anxiety of the 1950s about potential climate change: until the Partial Nuclear Test Ban Treaty (1963), atmospheric nuclear weapons testing was conducted widely and recklessly. Rob Craig suggests that the months of constant storms could be inspired by the Genesis flood narrative. In the context of the film, the strange weather is implied to be a side effect of Vornoff's experiments, which apparently release radioactivity into the atmosphere. The dialogue of the film includes well-known lines such as "Home? I have no home!", "One is always considered mad, when one discovers something which others cannot grasp", and the closing "He tampered in God's domain". The phrases could well apply to the fates of avant-garde artists and thinkers. The title "Bride of the Atom", which Vornoff uses for Janet in the bridal dress, is inexplicable unless the scientist is actually attempting to use Janet to replace his long-lost wife. One of his reassuring lines to Janet concerning the experiment, "It hurts, just for a moment, but then you will emerge a woman...", sounds as if he is preparing her for the loss of her virginity. The scene of a young woman in a bridal gown, restrained by leather shackles, seems sadomasochistic in nature.
Throughout the film, the mute Lobo is implied to have an unspecified intellectual disability and to be of sub-human intelligence. Yet he successfully operates complex machinery as if trained to do so. Craig views this as implying that supposedly "dumb" servants can be capable of learning the secrets of their masters. The final scenes, with the mushroom cloud of the nuclear explosion, use stock footage of a thermonuclear weapon ("hydrogen bomb") blast. Lobo's apparent fetish for angora wool reflects Wood's own fetish for the material; this also serves as the film's connection to "Glen or Glenda" (1953), where the fetish plays a more prominent role. The character of Lobo appeared again in Wood's "Night of the Ghouls", which served as a sequel of sorts to "Bride". Vornoff is absent from the later film, but there are references to the activities of "the mad doctor". Tor Johnson also plays a character called Lobo in "The Unearthly" (1957) by Boris Petroff; this character likewise serves the main villain. This film is part of what Wood aficionados refer to as "The Kelton Trilogy", a trio of films featuring Paul Marco as Officer Kelton, a whining, reluctant policeman. The other two films are "Plan 9 from Outer Space" and "Night of the Ghouls"; Kelton is the only character to appear in all three. In 1986, the film was featured in the syndicated series "Canned Film Festival" and was later featured on the comedy series "Mystery Science Theater 3000". The late-1990s dream trance track "Alright" by DJ Taucher sampled a Bela Lugosi monologue during the interlude of the song. In 2005, footage of the film was shown in "The Devil's Rejects". In 2008, a colorized version was released by Legend Films; this version is also available from Amazon Video on Demand.
In 2010, a retrospective on the movie entitled "Citizen Wood: Making 'The Bride', Unmaking the Legend" was included in the "Mystery Science Theater 3000" Volume 19 DVD set as a bonus feature for the episode featuring the movie. Horror host Mr. Lobo is among the interviewees of the 27-minute documentary. In 1980, the book "The Golden Turkey Awards" claimed that Lugosi's character declares his manservant Lobo (Tor Johnson) "as harmless as kitchen". This allegedly misspoken line is cited as evidence either of Lugosi's failing health and mental faculties or of Wood's incompetence as a director. However, a viewing of the film itself reveals that Lugosi delivered the line correctly, the exact words being "Don't be afraid of Lobo; he's as gentle as a kitten." The simplest explanation is that authors Michael Medved and Harry Medved saw the film in a theater with inferior sound quality, or viewed a damaged print; a single viewing in such conditions could result in mishearing some lines of dialogue. Unfortunately, the inaccurate claim achieved urban legend status and keeps circulating. In 1994, the biopic "Ed Wood", directed by Tim Burton, alleged that Wood and the filmmakers stole the mechanical octopus (previously used in the film "Wake of the Red Witch") from the Republic Studios backlot while failing to steal the motor that enabled the prop to move realistically, although, by the director's admission, the film preferred narrative interest over historical accuracy. These events are also alleged in the 2004 documentary "The 50 Worst Movies Ever Made". Other circulated stories, however, insist Wood legitimately rented the octopus, along with some cars. To remedy the lack of movement from the octopus prop, whenever someone was killed by the monster in the film, the actor simply flailed around in the shallow water while holding the tentacles to imitate movement.
The filming of these scenes, as well as the production of the film in general, was played to comic effect in "Ed Wood". Rudolph Grey's book "Nightmare of Ecstasy: The Life and Art of Edward D. Wood Jr." contains anecdotes regarding the making of this film. Grey notes that participants in the original events sometimes contradict one another, but he relates each person's information for posterity. He also includes Ed Wood's claim that one of his films made a profit and surmises that it was most likely "Bride of the Monster", but that Wood had oversold the film and could not reimburse the backers.
https://en.wikipedia.org/wiki?curid=5035
Berry paradox The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Consider the expression "The smallest positive integer not definable in under sixty letters". Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, there must be positive integers that cannot be defined by phrases of under sixty letters. If any positive integers satisfy a given property, then there is a "smallest" positive integer satisfying that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long, so it "is" definable in under sixty letters, and is therefore "not" the smallest positive integer not definable in under sixty letters, and so is "not" defined by this expression. This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it. Perhaps another helpful analogy to Berry's paradox is the phrase "indescribable feeling". If the feeling is indeed indescribable, then no description of the feeling would be true. But if the word "indescribable" communicates something about the feeling, then it may be considered a description: this is self-contradictory. Mathematician and computer scientist Gregory J.
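The finiteness step can be checked mechanically. The following is an illustrative sketch only (it assumes, for simplicity, phrases built from the 26 lowercase letters with no spaces or punctuation; the variable names are invented for the example):

```python
# Counting step of the Berry paradox: over a 26-letter alphabet there
# are only finitely many phrases of under sixty letters, so such
# phrases can define at most finitely many positive integers.
ALPHABET_SIZE = 26
MAX_LETTERS = 59  # "under sixty letters"

# Number of non-empty phrases of at most 59 letters.
num_phrases = sum(ALPHABET_SIZE ** k for k in range(1, MAX_LETTERS + 1))

# Sanity check against the closed form of the geometric series:
# sum_{k=1}^{59} 26^k = (26^60 - 26) / 25.
assert num_phrases == (26 ** 60 - 26) // 25

# A huge but finite number: infinitely many integers are left undefined.
print(num_phrases)
```

Since each phrase defines at most one integer, at most this many integers are definable in under sixty letters; the paradox turns on the self-reference of the defining phrase, not on this counting step.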
Chaitin in "The Unknowable" (1999) adds this comment: "Well, the Mexican mathematical historian Alejandro Garcidiego has taken the trouble to find that letter [of Berry's from which Russell penned his remarks], and it is rather a different paradox. Berry’s letter actually talks about the first ordinal that can’t be named in a finite number of words. According to Cantor’s theory such an ordinal must exist, but we’ve just named it in a finite number of words, which is a contradiction." The Berry paradox as formulated above arises because of systematic ambiguity in the word "definable". In other formulations of the Berry paradox, such as one that instead reads: "...not nameable in less..." the term "nameable" is also one that has this systematic ambiguity. Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. To resolve one of these paradoxes means to pinpoint exactly where our use of language went wrong and to provide restrictions on the use of language which may avoid them. This family of paradoxes can be resolved by incorporating stratifications of meaning in language. Terms with systematic ambiguity may be written with subscripts denoting that one level of meaning is considered a higher priority than another in their interpretation. "The number not nameable0 in less than eleven words" may be nameable1 in less than eleven words under this scheme. Using programs or proofs of bounded lengths, it is possible to construct an analogue of the Berry expression in a formal mathematical language, as has been done by Gregory Chaitin. Though the formal analogue does not lead to a logical contradiction, it does prove certain impossibility results. George Boolos (1989) built on a formalized version of Berry's paradox to prove Gödel's Incompleteness Theorem in a new and much simpler way. 
The basic idea of his proof is that a proposition that holds of "x" if and only if "x" = "n" for some natural number "n" can be called a "definition" for "n", and that the set {("n", "k"): "n" has a definition that is "k" symbols long} can be shown to be representable (using Gödel numbers). Then the proposition ""m" is the first number not definable in less than "k" symbols" can be formalized and shown to be a definition in the sense just stated. It is not possible in general to define unambiguously the minimal number of symbols required to describe a given string (given a specific description mechanism). In this context, the terms "string" and "number" may be used interchangeably, since a number is actually a string of symbols, e.g. an English word (like the word "eleven" used in the paradox), while, on the other hand, it is possible to refer to any word with a number, e.g. by the number of its position in a given dictionary or by suitable encoding. Some long strings can be described exactly using fewer symbols than those required by their full representation, as is often achieved using data compression. The complexity of a given string is then defined as the minimal length that a description requires in order to (unambiguously) refer to the full representation of that string. The Kolmogorov complexity is defined using formal languages or Turing machines, which avoids ambiguity about which string results from a given description. It can be proven that the Kolmogorov complexity is not computable: the proof by contradiction shows that if it were possible to compute the Kolmogorov complexity, then it would also be possible to systematically generate paradoxes similar to this one, i.e. descriptions shorter than what the complexity of the described string implies.
That is to say, the definition of the Berry number is paradoxical because it is not actually possible to compute how many words are required to define a number, and we know that such computation is not possible because of the paradox.
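The counting (pigeonhole) argument underlying these impossibility results can be sketched in a few lines. This is an illustrative sketch of the standard incompressibility count, not Chaitin's actual construction, and the function names are invented for the example:

```python
# Pigeonhole counting behind Kolmogorov-complexity incompressibility:
# there are 2**n binary strings of length n, but only 2**n - 1 binary
# descriptions (candidate programs) of length strictly less than n.
# Hence, for every n, at least one string of length n has no shorter
# description -- it is "incompressible". A computable complexity
# function would contradict this by enabling a Berry-style search
# for "the first string of complexity greater than n".

def count_strings(n: int) -> int:
    """Number of binary strings of length exactly n."""
    return 2 ** n

def count_shorter_descriptions(n: int) -> int:
    """Number of binary strings (candidate descriptions) of length < n."""
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in range(1, 32):
    # Strictly fewer short descriptions than strings of length n,
    # so some string of length n cannot be compressed below n bits.
    assert count_shorter_descriptions(n) < count_strings(n)
```

The same count explains why no lossless compressor can shorten every input: there are simply not enough short outputs to go around.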
https://en.wikipedia.org/wiki?curid=5036
List of Olympic medalists in biathlon This is the complete list of Olympic medalists in biathlon. Medalists in military patrol, a precursor to biathlon, are listed separately. The numbers in brackets denote biathletes who won a gold medal in the corresponding discipline more than once. Bold numbers denote the record number of victories in a given discipline. The women's relay event has been competed over three different distances: * denotes all Olympics in which the mentioned biathletes took part. Boldface denotes the latest Olympics. The top 10 biathletes by Winter Olympic gold medals are listed below. Boldface denotes active biathletes and the highest medal count among all biathletes (including those not included in these tables) per medal type. * denotes only those Olympics at which the mentioned biathletes won at least one medal
https://en.wikipedia.org/wiki?curid=5038
Biathlon World Championships The first Biathlon World Championships (BWCH) were held in 1958, with individual and team contests for men. The number of events has grown significantly over the years. Beginning in 1984, women biathletes had their own World Championships, and from 1989 both genders have participated in joint Biathlon World Championships. In 1978 the sport's development was furthered by the change from the large army rifle calibre to a small-bore rifle, while the range to the target was reduced from 150 to 50 metres. The Biathlon World Championships of the season take place during February or March. In some years it has been necessary to schedule parts of the Championships away from the main venue because of weather and/or snow conditions. Full, joint Biathlon World Championships have never been held in Olympic Winter Games seasons; Biathlon World Championships in non-IOC events, however, have been held in Olympic seasons. In 2005, the then-new event of the mixed relay (two legs skied by women, two by men) was arranged separately from the ordinary Championships. Bold numbers in brackets denote the record number of victories in the corresponding discipline. This event was first held in 1958. Medal table This event was first held in 1974. Medal table This event was first held in 1997. Medal table This event was first held in 1999. Medal table This event was first held unofficially in 1965. It was a success, and replaced the team competition as an official event in 1966. Medal table This event was held from 1958 to 1965. The times of the top 3 athletes from each country in the 20 km individual were added together (in 1958 the top 4). Medal table This event, a patrol race, was held from 1989 to 1998. 1989–93: 20 km. 1994–98: 10 km. Medal table Bold numbers in brackets denote the record number of victories in the corresponding discipline. This event was first held in 1984. Through 1988 the distance was 10 km.
Medal table This event was first held in 1984. Through 1988 the distance was 5 km. Medal table This event was first held in 1997. Medal table This event was first held in 1999. Medal table This event was first held in 1984. Through 1988, the event was 3 × 5 km. 1989–91: 3 × 7.5 km. 1993–2001: 4 × 7.5 km. In 2003, the leg distance was set to 6 km. Medal table This event, a patrol race, was held from 1989 to 1998. 1989–93: 15 km. 1994–98: 7.5 km. Medal table Bold numbers in brackets denote the record number of victories in the corresponding discipline. This event was first held in 2005, at the Biathlon World Cup finals in Khanty-Mansiysk. In 2005 the women biathletes skied the first two legs and the men the following two, while in 2006 the sequence was woman–man–woman–man. At the Biathlon World Championships 2007 in Antholz, the sequence was woman–woman–man–man. The ski legs are 6 km each (in 2007–19 the men's ski legs were 7.5 km). From 2007 only one team per nation has been allowed to compete. Medal table This event was first held in 2019. Medal table Updated after the 2020 Championships. Boldface denotes active biathletes and the highest medal count among all biathletes (including those not included in these tables) per medal type.
https://en.wikipedia.org/wiki?curid=5039
Belfast Belfast is the capital and largest city of Northern Ireland, standing on the banks of the River Lagan on the east coast. It is the 12th-largest city in the United Kingdom and the second-largest on the island of Ireland. It had a population of 333,871. Belfast suffered greatly during the Troubles: in the 1970s and 1980s it was one of the world's most dangerous cities, with a homicide rate around 31 per 100,000. By the early 19th century, Belfast had become a major port. It played an important role in the Industrial Revolution in Ireland, briefly becoming the biggest linen producer in the world and earning the nickname "Linenopolis". By the time it was granted city status in 1888, it was a major centre of Irish linen production, tobacco processing and rope making. Shipbuilding was also a key industry; the Harland and Wolff shipyard, which built the "Titanic", was the world's largest shipyard. Belfast also has a major aerospace and missiles industry. Industrialisation, and the inward migration it brought, made Belfast Northern Ireland's biggest city, and it became the de facto capital of Northern Ireland following the partition of Ireland in 1922. Its status as a global industrial centre ended in the decades after the Second World War of 1939–1945. Belfast is still a port with commercial and industrial docks, including the Harland and Wolff shipyard, dominating the Belfast Lough shoreline. It is served by two airports: George Best Belfast City Airport and Belfast International Airport west of the city. The Globalization and World Cities Research Network (GaWC) listed Belfast as a Gamma global city in 2018. The name Belfast is derived from the Irish "Béal Feirsde", which was later spelt "Béal Feirste". The word "béal" means "mouth" or "rivermouth" while "feirste" is the genitive singular of "fearsaid" and refers to a sandbar or tidal ford across a river's mouth. The name therefore translates literally as "(river) mouth of the sandbar" or "(river) mouth of the ford".
This sandbar was formed at the confluence of two rivers at what is now Donegall Quay: the Lagan, which flows into Belfast Lough, and its tributary the Farset. This area was the hub around which the original settlement developed. The Irish name "Béal Feirste" is shared by a townland in County Mayo, whose name has been anglicised as "Belfarsad". An alternative interpretation of the name is "mouth of [the river] of the sandbar", an allusion to the River Farset, which flows into the Lagan where the sandbar was located. This interpretation was favoured by Edmund Hogan and John O'Donovan. It seems clear, however, that the river itself was named after the tidal crossing. In Ulster Scots, the name of the city has been variously translated as "Bilfawst", "Bilfaust" or "Baelfawst", although "Belfast" is also used. The county borough of Belfast was created when it was granted city status by Queen Victoria in 1888, and the city continues to straddle County Antrim and County Down. The site of Belfast has been occupied since the Bronze Age. The Giant's Ring, a 5,000-year-old henge, is located near the city, and the remains of Iron Age hill forts can still be seen in the surrounding hills. Belfast remained a small settlement of little importance during the Middle Ages. John de Courcy built a castle on what is now Castle Street in the city centre in the 12th century, but this was on a lesser scale and not as strategically important as Carrickfergus Castle to the north, which de Courcy built in 1177. The O'Neill clan had a presence in the area. In the 14th century, the Cloinne Aodha Buidhe, descendants of Aodh Buidhe O'Neill, built Grey Castle at Castlereagh, now in the east of the city. Conn O'Neill of the Clannaboy O'Neills owned vast lands in the area and was the last inhabitant of Grey Castle; one remaining link is the Conn's Water river flowing through east Belfast.
Belfast became a substantial settlement in the 17th century after being established as an English town by Sir Arthur Chichester. As it grew with the port, and with textile manufacture, the English element was overwhelmed by the influx of Scottish Presbyterians. As "Dissenters" from the established Church of Ireland communion, the Presbyterians were conscious of sharing, if only in part, the disabilities of Ireland's largely dispossessed Roman Catholic majority. When, during the American War of Independence, Belfast Lough was raided by the privateer John Paul Jones, the townspeople assembled their own Volunteer militia. This emboldened a spirit of radical disaffection. Further enthused by the French Revolution, the Volunteers and townspeople rallied in support of Catholic emancipation and "a more equal representation of the people" in the Irish Parliament. The two MPs Belfast returned to Dublin had remained nominees of the Chichesters (Marquesses of Donegall). In the face of the Ascendancy's intransigence, these demands were taken up by the Society of United Irishmen, formed at a meeting in the town addressed by Theobald Wolfe Tone. In the expectation of French assistance, the Society organised a republican insurrection, defeated to the north and south of Belfast, at Antrim and Ballynahinch, in 1798. Evidence of this period of Belfast's growth can still be seen in the oldest areas of the city, known as the Entries. Rapid industrial growth in the nineteenth century drew in landless Catholics from outlying rural and western districts, most settling to the west of the town. The plentiful supply of cheap labour helped attract English and Scottish capital to Belfast, but it was also a cause of insecurity. Protestant workers organised to protect "their" jobs, giving a new lease of life in the town to the once largely rural Orange Order. Sectarian tensions were heightened by movements to repeal the Acts of Union and to restore a Parliament in Dublin.
Given the progressive enlargement of the British electoral franchise, this would have had an overwhelming Catholic majority and, it was widely believed, interests inimical to the Protestant and industrial north. In 1864 and 1886 the issue had helped trigger deadly sectarian riots. Sectarian tension was not in itself unique to Belfast: it was shared with Liverpool and Glasgow, cities that following the Great Famine had also experienced large-scale Irish Catholic immigration. But also common to this "industrial triangle" were traditions of labour militancy. In 1919, workers in all three cities struck for a ten-hour reduction in the working week. In Belfast—notwithstanding the political friction caused by Sinn Féin's electoral triumph in the south—this involved some 60,000 workers, Protestant and Catholic, in a four-week walk-out. In a demonstration of their resolve not to submit to a Dublin parliament, in 1912 Unionists presented the Ulster Covenant at Belfast City Hall; together with an associated Declaration for women, it accumulated over 470,000 signatures. This was followed by the drilling and eventual arming of a 100,000-strong Ulster Volunteer Force. The crisis was abated by the onset of the Great War, the sacrifices of the UVF in which continue to be commemorated in the city (Somme Day) by Unionist and Loyalist organisations. In 1921, as the greater part of Ireland seceded as the Irish Free State, Belfast became the capital of the six counties remaining in the United Kingdom as Northern Ireland. In 1932 the devolved parliament for the region was housed in new buildings at Stormont on the eastern edge of the city. In 1920–21, as the two parts of Ireland drew apart, up to 500 people were killed in disturbances in Belfast, the bloodiest period of strife in the city until the Troubles of the late 1960s onwards. Belfast was heavily bombed during World War II. The initial raids came as a surprise, as the city was believed to be beyond the range of German bombers.
In one raid, in 1941, German bombers killed around one thousand people and left tens of thousands homeless. Apart from London, this was the greatest loss of life in a night raid during the Blitz. Belfast has been the capital of Northern Ireland since its establishment in 1921 following the Government of Ireland Act 1920. It had been the scene of various episodes of sectarian conflict between its Catholic and Protestant populations. The opposing groups in this conflict are now often termed republican and loyalist respectively, although they are also loosely referred to as 'nationalist' and 'unionist'. The most recent example of this conflict was known as the Troubles – a civil conflict that raged from around 1969 to 1998. Belfast saw some of the worst of the Troubles in Northern Ireland, particularly in the 1970s, with rival paramilitary groups formed on both sides. Bombing, assassination and street violence formed a backdrop to life throughout the Troubles. In December 1971, 15 people, including two children, were killed when the Ulster Volunteer Force (UVF) bombed McGurk's Bar, the greatest loss of life in a single incident in Belfast. The Provisional IRA detonated 22 bombs within the confines of Belfast city centre on 21 July 1972, on what is known as "Bloody Friday", killing nine people. Loyalist paramilitaries, including the UVF and the Ulster Defence Association (UDA), said that the killings they carried out were in retaliation for the IRA campaign; most of their victims were Catholics with no links to the Provisional IRA. A particularly notorious group, based on the Shankill Road in the mid-1970s, became known as the Shankill Butchers. During the Troubles the Europa Hotel suffered 36 bomb attacks, becoming known as "the most bombed hotel in the world". In all, over 1,600 people were killed in political violence in the city between 1969 and 2001. Belfast city centre has undergone expansion and regeneration since the late 1990s, notably around Victoria Square.
In late 2018, it was announced that Belfast would undergo a £500 million urban regeneration project known as "Tribeca" on a large city centre site. However, tensions and civil disturbances still occur despite the 1998 peace agreement, including sectarian riots and paramilitary attacks. Belfast and the Causeway Coast were together named the best place to visit in 2018 by Lonely Planet. Tourist numbers have increased since the end of the Troubles, boosted in part by newer attractions such as Titanic Belfast and tours of locations used in the HBO television series "Game of Thrones". Belfast was granted borough status by James VI and I in 1613 and official city status by Queen Victoria in 1888. Since 1973 it has been a local government district under local administration by Belfast City Council. Belfast is represented in both the British House of Commons and the Northern Ireland Assembly. For elections to the European Parliament, Belfast is within the Northern Ireland constituency. Belfast City Council is the local council with responsibility for the city. The city's elected officials are the Lord Mayor of Belfast, Deputy Lord Mayor and High Sheriff, who are elected from among 60 councillors. The first Lord Mayor of Belfast was Daniel Dixon, who was elected in 1892. The Lord Mayor for 2019–20 is John Finucane of Sinn Féin, while the Deputy Lord Mayor is an Alliance Party of Northern Ireland councillor. The Lord Mayor's duties include presiding over meetings of the council, receiving distinguished visitors to the city, and representing and promoting the city on the national and international stage. In 1997, Unionists lost overall control of Belfast City Council for the first time in its history, with the Alliance Party of Northern Ireland gaining the balance of power between Nationalists and Unionists.
This position was confirmed in four subsequent council elections, with mayors from Sinn Féin and the Social Democratic and Labour Party (SDLP), both Nationalist parties, and the cross-community Alliance Party regularly elected since. The first nationalist Lord Mayor of Belfast was Alban Maginness of the SDLP, in 1997. Belfast council takes part in the twinning scheme and is twinned with Nashville and Boston in the United States and Hefei in China. As Northern Ireland's capital city, Belfast is host to the Northern Ireland Assembly at Stormont, the site of the devolved legislature for Northern Ireland. Belfast is divided into four Northern Ireland Assembly and UK parliamentary constituencies: Belfast North, Belfast West, Belfast South and Belfast East. All four extend beyond the city boundaries to include parts of Castlereagh, Lisburn and Newtownabbey districts. In the Northern Ireland Assembly elections in 2017, Belfast elected 20 Members of the Legislative Assembly (MLAs), 5 from each constituency: 7 Sinn Féin, 5 DUP, 2 SDLP, 3 Alliance Party, 1 UUP, 1 Green and 1 PBPA. In the 2017 UK general election, Belfast elected one MP from each constituency to the House of Commons at Westminster, London: 3 DUP and 1 Sinn Féin. In the 2019 UK general election, the DUP lost two of their Belfast seats: to Sinn Féin in North Belfast and to the SDLP in South Belfast. Belfast is at the western end of Belfast Lough and at the mouth of the River Lagan, giving it the ideal location for the shipbuilding industry that once made it famous. When the "Titanic" was built in Belfast in 1911–1912, Harland and Wolff had the largest shipyard in the world. Belfast is situated on Northern Ireland's eastern coast. A consequence of this northern latitude is that it both endures short winter days and enjoys long summer evenings.
During the winter solstice, the shortest day of the year, local sunset is before 16:00, while sunrise is around 08:45. This is balanced by the summer solstice in June, when the sun sets after 22:00 and rises before 05:00. In 1994, a weir was built across the river by the Laganside Corporation to raise the average water level so that it would cover the unseemly mud flats which gave Belfast its name. The area of Belfast Local Government District is . The River Farset is also named after this silt deposit (from the Irish "feirste" meaning "sand spit"). Originally a more significant river than it is today, the Farset formed a dock on High Street until the mid-19th century. Bank Street in the city centre referred to the river bank, and Bridge Street was named for the site of an early Farset bridge. Superseded by the River Lagan as the more important river in the city, the Farset now languishes in obscurity under High Street. There are no fewer than eleven other minor rivers in and around Belfast, namely the Blackstaff, the Colin, the Connswater, the Cregagh, the Derriaghy, the Forth, the Knock, the Legoniel, the Milewater, the Purdysburn and the Ravernet. The city is flanked on the north and northwest by a series of hills, including Divis Mountain, Black Mountain and Cavehill, thought to be the inspiration for Jonathan Swift's "Gulliver's Travels". When Swift was living at Lilliput Cottage near the bottom of Belfast's Limestone Road, he imagined that the Cavehill resembled the shape of a sleeping giant safeguarding the city. The shape of the giant's nose, known locally as "Napoleon's Nose", is officially called McArt's Fort, probably named after Art O'Neill, a 17th-century chieftain who controlled the area at that time. The Castlereagh Hills overlook the city on the southeast.
As with the vast majority of the rest of Ireland, Belfast has a temperate oceanic climate ("Cfb" in the Köppen climate classification), with a narrow range of temperatures and rainfall throughout the year. The climate of Belfast is significantly milder than most other locations in the world at a similar latitude, due to the warming influence of the Gulf Stream. There are currently five weather observing stations in the Belfast area: Helens Bay, Stormont, Newforge, Castlereagh, and Ravenhill Road. Slightly further afield is Aldergrove Airport. The highest temperature recorded at any official weather station in the Belfast area was at Shaws Bridge on 12 July 1983. The city gets significant precipitation (greater than 1 mm) on 157 days in an average year, with an average annual rainfall of , less than areas of northern England or most of Scotland, but higher than Dublin or the south-east coast of Ireland. As an urban and coastal area, Belfast typically gets snow on fewer than 10 days per year. The absolute maximum temperature at the weather station at Stormont is , set during July 1983. In an average year the warmest day will rise to a temperature of , with a day of or above occurring roughly once every two to three years. The absolute minimum temperature at Stormont is , recorded in January 1982, although in an average year the coldest night will fall no lower than , with air frost being recorded on just 26 nights. The lowest temperature to occur in recent years was on 22 December 2010. The nearest weather station for which sunshine data and longer-term observations are available is Belfast International Airport (Aldergrove). Temperature extremes here have slightly more variability due to the more inland location. The average warmest day at Aldergrove, for example, will reach a temperature of ( higher than Stormont), and 2.1 days in total should attain a temperature of or above.
Conversely, the coldest night of the year averages ( lower than Stormont), and 39 nights should register an air frost, some 13 more frosty nights than at Stormont. The minimum temperature at Aldergrove was , during December 2010. Belfast expanded very rapidly from being a market town to becoming an industrial city during the course of the 19th century. Because of this, it is less an agglomeration of villages and towns that have expanded into each other than other comparable cities, such as Manchester or Birmingham. The city expanded to the natural barrier of the hills that surround it, overwhelming other settlements. Consequently, the arterial roads along which this expansion took place (such as the Falls Road or the Newtownards Road) are more significant in defining the districts of the city than nucleated settlements. Belfast remains segregated by walls, commonly known as "peace lines", erected by the British Army after August 1969, which still divide 14 districts in the inner city. In 2008 a process was proposed for the removal of the "peace walls". In June 2007, a £16 million programme was announced to transform and redevelop streets and public spaces in the city centre. Major arterial roads (quality bus corridors) into the city include the Antrim Road, Shore Road, Holywood Road, Newtownards Road, Castlereagh Road, Cregagh Road, Ormeau Road, Malone Road, Lisburn Road, Falls Road, Springfield Road, Shankill Road, and Crumlin Road. Belfast city centre is divided into two postcode districts: "BT1" for the area lying north of the City Hall, and "BT2" for the area to its south. The industrial estate and docklands carry "BT3". The rest of the Belfast post town is divided in a broadly clockwise system from "BT3" in the north-east round to "BT15", with "BT16" and "BT17" further out to the east and west respectively. Although "BT" derives from "Belfast", the BT postcode area extends across the whole of Northern Ireland.
Since 2001, boosted by increasing numbers of tourists, the city council has developed a number of cultural quarters. The Cathedral Quarter takes its name from St Anne's Cathedral (Church of Ireland) and has taken on the mantle of the city's key cultural locality. It hosts a yearly visual and performing arts festival. Custom House Square is one of the city's main outdoor venues for free concerts and street entertainment. The Gaeltacht Quarter is an area around the Falls Road in west Belfast which promotes and encourages the use of the Irish language. The Queen's Quarter in south Belfast is named after Queen's University. The area has a large student population and hosts the annual Belfast International Arts Festival each autumn. It is home to Botanic Gardens and the Ulster Museum, which was reopened in 2009 after major redevelopment. The Golden Mile is the name given to the mile between Belfast City Hall and Queen's University. Taking in Dublin Road, Great Victoria Street, Shaftesbury Square and Bradbury Place, it contains some of the best bars and restaurants in the city. Since the Good Friday Agreement in 1998, the nearby Lisburn Road has developed into the city's most exclusive shopping strip.
https://en.wikipedia.org/wiki?curid=5046
Biotite Biotite is a common group of phyllosilicate minerals within the mica group, with the approximate chemical formula K(Mg,Fe)₃AlSi₃O₁₀(F,OH)₂. It is primarily a solid-solution series between the iron endmember annite and the magnesium endmember phlogopite; more aluminous end-members include siderophyllite and eastonite. Biotite was regarded as a mineral "species" by the International Mineralogical Association until 1998, when its status was changed to a mineral "group". The term "biotite" is still used to describe unanalysed dark micas in the field. Biotite was named by J.F.L. Hausmann in 1847 in honor of the French physicist Jean-Baptiste Biot, who performed early research into the many optical properties of mica. Members of the biotite group are sheet silicates. Iron, magnesium, aluminium, silicon, oxygen, and hydrogen form sheets that are weakly bound together by potassium ions. The term "iron mica" is sometimes used for iron-rich biotite, but the term also refers to a flaky micaceous form of haematite, and the field term "lepidomelane" for unanalysed iron-rich biotite avoids this ambiguity. Biotite is also sometimes called "black mica" as opposed to "white mica" (muscovite) – both form in the same rocks, and in some instances side by side. Like other mica minerals, biotite has a highly perfect basal cleavage, and consists of flexible sheets, or lamellae, which easily flake off. It has a monoclinic crystal system, with tabular to prismatic crystals with an obvious pinacoid termination. It has four prism faces and two pinacoid faces to form a pseudohexagonal crystal. Although not easily seen because of the cleavage and sheets, fracture is uneven. It appears greenish to brown or black, and even yellow when weathered. It can be transparent to opaque, has a vitreous to pearly luster, and a grey-white streak. When biotite crystals are found in large chunks, they are called "books" because they resemble books with pages of many sheets.
The color of biotite is usually black and the mineral has a hardness of 2.5–3 on the Mohs scale of mineral hardness. Biotite dissolves in both acid and alkaline aqueous solutions, with the highest dissolution rates at low pH. However, biotite dissolution is highly anisotropic, with crystal edge surfaces (hk0) reacting 45 to 132 times faster than basal surfaces (001). In thin section, biotite exhibits moderate relief and a pale to deep greenish brown or brown color, with moderate to strong pleochroism. Biotite has a high birefringence, which can be partially masked by its deep intrinsic color. Under cross-polarized light, biotite exhibits extinction approximately parallel to cleavage lines, and can have characteristic bird's eye extinction, a mottled appearance caused by the distortion of the mineral's flexible lamellae during grinding of the thin section. Basal sections of biotite in thin section are typically approximately hexagonal in shape and usually appear isotropic under cross-polarized light. Members of the biotite group are found in a wide variety of igneous and metamorphic rocks. For instance, biotite occurs in the lava of Mount Vesuvius and in the Monzoni intrusive complex of the western Dolomites. Biotite in granite tends to be poorer in magnesium than the biotite found in its volcanic equivalent, rhyolite. Biotite is an essential phenocryst in some varieties of lamprophyre. Biotite is occasionally found in large cleavable crystals, especially in pegmatite veins, as in New England, Virginia, and North Carolina in the United States. Other notable occurrences include Bancroft and Sudbury in Ontario, Canada. It is an essential constituent of many metamorphic schists, and it forms in suitable compositions over a wide range of pressure and temperature. It has been estimated that biotite comprises up to 7% of the exposed continental crust. An igneous rock composed almost entirely of dark mica (biotite or phlogopite) is known as a "glimmerite" or "biotitite".
Biotite may be found in association with its common alteration product chlorite. The largest documented single crystals of biotite were approximately sheets found in Iveland, Norway. Biotite is used extensively to constrain ages of rocks, by either potassium-argon dating or argon–argon dating. Because argon escapes readily from the biotite crystal structure at high temperatures, these methods may provide only minimum ages for many rocks. Biotite is also useful in assessing temperature histories of metamorphic rocks, because the partitioning of iron and magnesium between biotite and garnet is sensitive to temperature.
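The potassium–argon ages mentioned above come from comparing the radiogenic ⁴⁰Ar accumulated in the crystal with the ⁴⁰K remaining. A minimal sketch of the standard K–Ar age equation, using the conventional ⁴⁰K decay constants (Steiger and Jäger, 1977); the measured ratio in the example is hypothetical:

```python
import math

# Conventional 40K decay constants (Steiger & Jäger 1977), per year.
LAMBDA_TOTAL = 5.543e-10  # total decay of 40K (to 40Ca and 40Ar)
LAMBDA_EC = 0.581e-10     # electron-capture branch that produces 40Ar

def k_ar_age(ar40_over_k40):
    """Apparent K-Ar age in years from the radiogenic 40Ar*/40K molar ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

# Hypothetical biotite measurement: 40Ar*/40K = 0.006 gives an apparent age
# of roughly 100 million years.
print(round(k_ar_age(0.006) / 1e6, 1))
```

As the article notes, argon escapes readily from biotite at high temperatures, so an age computed this way records the time since the crystal last cooled below its closure temperature and is therefore only a minimum age for the rock itself.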
https://en.wikipedia.org/wiki?curid=5047
Brigham Young Brigham Young (; June 1, 1801 – August 29, 1877) was an American religious leader, politician, and settler. He was the second president of The Church of Jesus Christ of Latter-day Saints (LDS Church) from 1847 until his death in 1877. He founded Salt Lake City and served as the first governor of the Utah Territory. Young also led the foundings of the precursors to the University of Utah and Brigham Young University. Young had many nicknames, among the most popular being "American Moses" (alternatively, the "Modern Moses" or "Mormon Moses"), because, like the biblical figure, Young led his followers, the Mormon pioneers, in an exodus through a desert to what they saw as a promised land. Young was dubbed by his followers the "Lion of the Lord" for his bold personality and was commonly called "Brother Brigham" by Latter-day Saints. A polygamist, Young had 55 wives. He instituted a church ban against conferring the priesthood on men of black African descent, and also led the church during the Utah War against the United States. Young was born the eighth child of John Young and Abigail "Nabby" Howe, a farming family in Whitingham, Vermont. When he was three, his family moved to upstate New York, settling in Sherburne, New York. At age 12 he moved with his parents to Aurelius, New York, close to Cayuga Lake. When he was 14, his mother died of tuberculosis, after which he moved with his father to Tyrone, New York. At age 16, Young's father made him leave home. He first worked odd jobs and then became an apprentice to John C. Jeffries in Auburn, New York. He worked as a carpenter, joiner, glazier, and painter. One home Young helped paint in Auburn was that of Elijah Miller, which later became the residence of William Seward. It is now a local museum (see William Seward House). It is also claimed by locals that the fireplace mantel of this house was created by Young.
With the onset of the depression of 1819, Jeffries dismissed Young from his apprenticeship, and Young moved to Port Byron, New York. After a period of deep reading of the Bible, Young converted to the Reformed Methodist Church in 1824. When joining the Methodists he insisted on being baptized by immersion instead of their normal practice of sprinkling. Young was first married in 1824 to Miriam Angeline Works, whom he had met in Port Byron. They first lived in a small unpainted house adjacent to the pail factory which was at the time Young's main place of employment. Also in Port Byron, Young joined a debating society. Shortly after the birth of their first daughter, the family moved to Oswego, New York, on the shores of Lake Ontario. In 1828 they moved to Mendon, New York. Most of Young's siblings had already moved to Mendon, or did so shortly after he moved there. It was here he first became friends with Heber C. Kimball. Here he worked as a carpenter and joiner and built a saw mill that he operated. In 1832, Miriam died, and Young and his two young daughters moved into the household of Kimball and his wife, Vilate. By this point Young had, for all intents and purposes, left the Reformed Methodists, becoming a Christian seeker, unconvinced that he had found a church with the true authority of Jesus Christ. As early as 1830, Young was introduced to the Book of Mormon by way of a copy his brother, Phineas H., had obtained from Samuel H. Smith. In 1831, five missionaries of the Latter Day Saint movement (Eleazer Miller, Elial Strong, Alpheus Gifford, Enos Curtis, and Daniel Bowen) came from the branch of the church in Columbia, Pennsylvania to preach in Mendon. A key attraction of this group's teachings for Young was their practice of spiritual gifts, which he experienced in part when he traveled with his wife and Kimball to visit the branch of the church in Columbia, Pennsylvania.
Young was drawn to the new church after reading the Book of Mormon. He officially joined the Church of Christ on April 14, 1832, being baptized by Eleazer Miller. A branch of the church was organized in Mendon, and Young was one of its regular preachers. He quickly expanded his area of sharing the gospel of Jesus Christ, traveling southwest to Warsaw, New York and southeast to various towns along Canandaigua Lake. Shortly after this, Young saw Alpheus Gifford speak in tongues, and in response Young also spoke in an unknown language. In November 1832, Young traveled with Kimball to Kirtland, Ohio, and visited Joseph Smith. During this trip Young spoke in a tongue that was identified by Smith as the "Adamic language". In December 1832, Young left his daughters with the Kimballs and set out on a mission with his brother, Joseph, to Upper Canada, primarily to what is now Kingston, Ontario. Later they extended their preaching to various towns along the north shore of Lake Erie. In February 1833, they returned to Mendon. A few months later, Young again set out on a mission with his brother, Joseph, this time traveling into the north of New York and then on into modern Ontario. In the summer of 1833, Young moved to Kirtland, Ohio. Here he met Mary Ann Angell, and they were married on February 18, 1834. In Kirtland, Young continued to preach the gospel; in fact Mary Ann first encountered him through hearing him preach. Young also resumed work on building houses. In May 1834, Young became a member of Zion's Camp. He traveled to Missouri and was part of it until it disbanded on July 3, 1834. After his return to Kirtland, Young focused his carpentry work on the Kirtland Temple and also prepared for the birth of his third child, his first son, Joseph A. Young. Mary Ann had largely provided for Young's two daughters on her own while pregnant with her first child during Young's absence with Zion's Camp.
In Kirtland, Young was involved in adult education, including studying in a Hebrew language class under Joshua Seixas. Young was ordained a member of the original Quorum of the Twelve Apostles in May 1835. Later that month, Young left with the other members of the Quorum of the Twelve on a proselytizing mission to New York state and New England. In August 1835, Young and the rest of the Quorum of the Twelve issued a testimony in support of the divine origin of the Doctrine and Covenants. He was then involved in the dedication of the Kirtland Temple in 1836. Shortly after this, Young went on another mission with his brother, Joseph, to New York and New England. On this mission he visited the family of his aunt, Rhoda Howe Richards. They converted to the church, including his cousin Willard Richards. He then returned to Kirtland, where he remained until events related to anger over the failure of the Kirtland Safety Society forced him to flee the community in December 1837. He then stayed for a short time in Dublin, Indiana with his brother, Lorenzo, and then moved on to Caldwell County, Missouri. Young became the quorum president in March 1839. Under his direction, the quorum served a mission to the United Kingdom and organized the exodus of Latter Day Saints from Missouri in 1838–39. In 1844, the church's president, Joseph Smith, was killed by an armed mob while in jail awaiting trial on treason charges. Several claimants to the role of church president emerged during the succession crisis that ensued. Before a large meeting convened to discuss the succession in Nauvoo, Illinois, Sidney Rigdon, the senior surviving member of the church's First Presidency, argued there could be no successor to the deceased prophet and that he should be made the "Protector" of the church. Young opposed this reasoning and motion.
Smith had earlier recorded a revelation which stated the Quorum of the Twelve was "equal in authority and power" to the First Presidency, so Young claimed that the leadership of the church fell to the Twelve Apostles. The majority in attendance were persuaded that the Quorum of the Twelve was to lead the church, with Young as the quorum's president. Many of Young's followers would later reminisce that while Young spoke to the congregation, he looked or sounded exactly like Smith, which they attributed to the power of God. Young was ordained President of the Church in December 1847, three and a half years after Smith's death. Rigdon became the president of a separate church organization based in Pittsburgh, Pennsylvania, and other potential successors emerged to lead what became other denominations of the movement. Repeated conflict led Young to relocate his group of Latter-day Saints to the Salt Lake Valley, which was then part of Mexico. Young organized the journey that would take the Mormon pioneers to Winter Quarters, Nebraska, in 1846, then to the Salt Lake Valley. By the time Young arrived at the final destination, it had come under American control as a result of war with Mexico, although U.S. sovereignty would not be confirmed until 1848. Young arrived in the Salt Lake Valley on July 24, 1847, a date now recognized as Pioneer Day in Utah. Young's expedition was one of the largest and one of the best organized westward treks. On August 22, 29 days after arriving in the Salt Lake Valley, Young organized the Mormon Tabernacle Choir. After three years of leading the church as the President of the Quorum of the Twelve Apostles, Young reorganized a new First Presidency and was sustained as the second president of the church on December 27, 1847. As colonizer and founder of Salt Lake City, Young was appointed the territory's first governor and superintendent of American Indian affairs by President Millard Fillmore on February 3, 1851. 
During his time as prophet, Young directed the establishment of settlements throughout present-day Utah, Idaho, Arizona, Nevada, California and parts of southern Colorado and northern Mexico. Under his direction, the Mormons built roads, bridges, forts, and irrigation projects; established public welfare; organized a militia; and issued an extermination order against the Timpanogos, eventually making peace with the Native Americans after a series of wars. Young was also one of the first to subscribe to Union Pacific stock for the construction of the First Transcontinental Railroad. Young organized the first legislature and established Fillmore as the territory's first capital. Young organized a board of regents to establish a university in the Salt Lake Valley. It was established on February 28, 1850, as the University of Deseret; its name was eventually changed to the University of Utah. In 1851, Young and several federal officials, including territorial Secretary Broughton Harris, became unable to work cooperatively. Harris and the others departed Utah without replacements being named, and these individuals later became known as the Runaway Officials of 1851. Young supported slavery and its expansion into Utah, and led the efforts to legalize and regulate slavery in the 1852 Act in Relation to Service, based on his beliefs on slavery. Young said in an 1852 speech, "In as much as we believe in the Bible ... we must believe in slavery. This colored race have been subjected to severe curses ... which they have brought upon themselves." In 1856, Young organized an efficient mail service. In 1858, following the events of the Utah War, he stepped down as governor in favor of his successor, Alfred Cumming. Young was the longest-serving president of the LDS Church in history, having served for 29 years. On October 16, 1875, Young deeded buildings and land in Provo, Utah to a board of trustees for establishing an institution of learning, ostensibly as part of the University of Deseret.
Young said, "I hope to see an Academy established in Provo ... at which the children of the Latter-day Saints can receive a good education unmixed with the pernicious atheistic influences that are found in so many of the higher schools of the country." The school broke off from the University of Deseret and became Brigham Young Academy, the precursor to Brigham Young University. Within the church, Young reorganized the Relief Society for women in 1867, and he created organizations for young women in 1869 and young men in 1875. Young was involved in temple building throughout his membership in the LDS Church, making it a priority of his church presidency. Under Smith's leadership, Young participated in the building of the Kirtland and Nauvoo temples. Just four days after arriving in the Salt Lake Valley, Young designated the location for the Salt Lake Temple; he presided over its groundbreaking on April 6, 1853. During his tenure, Young oversaw construction of the Salt Lake Tabernacle and he announced plans to build the St. George (1871), Manti (1875), and Logan (1877) temples. He also provisioned the building of the Endowment House, a "temporary temple" which began to be used in 1855 to provide temple ordinances to church members while the Salt Lake Temple was under construction. The majority of Young's teachings are contained in the 19 volumes of transcribed and edited sermons in the Journal of Discourses. The LDS Church's Doctrine and Covenants contains one section from Young that has been canonized as scripture, adding the section in 1876. Though polygamy was practiced by Young's predecessor Joseph Smith, the practice is often associated with Young. Some Latter Day Saint denominations, such as the Community of Christ, consider Young the "Father of Mormon Polygamy". In 1853, Young made the church's first official statement on the subject since the church had arrived in Utah. 
Young acknowledged the suffering the doctrine created for women, but stated its necessity for creating large families, proclaiming: "But the first wife will say, 'It is hard, for I have lived with my husband twenty years, or thirty, and have raised a family of children for him, and it is a great trial to me for him to have more women;' then I say it is time that you gave him up to other women who will bear children." One of the more controversial teachings of Young was the Adam–God doctrine. According to Young, he was taught by Smith that Adam is "our Father and our God, and the only God with whom we have to do". According to the doctrine, Adam was once a mortal man who became resurrected and exalted. From another planet, Adam brought Eve, one of his wives, with him to the earth, where they became mortal by eating the fruit of the Garden of Eden. After bearing mortal children and establishing the human race, Adam and Eve returned to their heavenly thrones where Adam acts as the god of this world. Later, as Young is generally understood to have taught, Adam returned to the earth to become the biological father of Jesus. The LDS Church has since repudiated the Adam–God doctrine. Young is generally considered to have instituted a church ban against conferring the priesthood on men of black African descent, who had been treated equally in this respect under Smith's presidency. After settling in Utah in 1848, Young announced the ban, which also forbade blacks from participating in Mormon temple rites such as the endowment or sealings. 
On many occasions, Young taught that blacks were denied the priesthood because they were "the seed of Cain", but also stated that they would eventually receive the priesthood after "all the other children of Adam have the privilege of receiving the Priesthood, and of coming into the kingdom of God, and of being redeemed from the four-quarters of the earth, and have received their resurrection from the dead, then it will be time enough to remove the curse from Cain and his posterity." These racial restrictions remained in place until 1978, when the policy was rescinded by LDS Church president Spencer W. Kimball, and the LDS Church subsequently "disavow[ed] theories advanced in the past" to explain this ban, thereby "plac[ing] the origins of black priesthood denial blame squarely on Brigham Young." In 1863, Young stated: "Shall I tell you the law of God in regard to the African race? If the white man who belongs to the chosen seed mixes his blood with the seed of Cain, the penalty, under the law of God, is death on the spot. This will always be so." Young was a vocal opponent of theories of human polygenesis, being a firm voice for stating that all humans were the product of one creation. Shortly after the arrival of Young's pioneers, the new Mormon colonies were incorporated into the United States through the Mexican Cession. Young petitioned the U.S. Congress to create the State of Deseret. The Compromise of 1850 instead carved out Utah Territory and Young was installed as governor. As governor and church president, Young directed both religious and economic matters. He encouraged independence and self-sufficiency. Many cities and towns in Utah, and some in neighboring states, were founded under Young's direction. Young's leadership style has been viewed as autocratic. When federal officials received reports of widespread and systematic obstruction of federal officials in Utah (most notably judges), U.S. 
President James Buchanan decided to install a non-Mormon governor. Buchanan accepted the reports of the judges without any further investigation, and the new non-sectarian governor was accompanied by troops sent to garrison forts in the new territory. When Young received word that federal troops were headed to Utah with his replacement, he called out his militia to ambush the federal force. During the defense of Utah, now called the Utah War, Young held the U.S. Army at bay for a winter by taking their cattle and burning supply wagons. The Mormon forces were largely successful thanks to Lot Smith. Young eventually relented and agreed to step down as governor. He later received a pardon from Buchanan. Relations between Young and future governors and U.S. Presidents were mixed. The degree of Young's involvement in the Mountain Meadows massacre, which took place in Washington County in 1857, is disputed. Leonard J. Arrington reports that Young received a rider at his office on the day of the massacre, and that when he learned of the contemplated attack by the members of the LDS Church in Parowan and Cedar City, he sent back a letter directing that the Fancher party be allowed to pass through the territory unmolested. Young's letter reportedly arrived on September 13, 1857, two days after the massacre. As governor, Young had promised the federal government he would protect immigrants passing through Utah Territory, but over 120 men, women and children were killed in this incident. Scholars do not dispute the involvement of individual Mormons from the surrounding communities. Only children under the age of seven, who were cared for by local Mormon families, survived, and the murdered members of the wagon train were left unburied.
The remains of about 40 people were later found and buried, and Union Army officer James Henry Carleton had a large cross made from local trees, the transverse beam bearing the engraving, "Vengeance Is Mine, Saith The Lord: I Will Repay", and erected a cairn of rocks at the site. A large slab of granite was put up on which he had the following words engraved: "Here 120 men, women and children were massacred in cold blood early in September, 1857. They were from Arkansas." For two years, the monument stood as a memorial to those travelling the Spanish Trail through Mountain Meadow. Some claim that, in 1861, Young brought an entourage to Mountain Meadows and had the cairn and cross destroyed, while exclaiming, "Vengeance is mine and I have taken a little". Before his death in Salt Lake City on August 29, 1877, Young was suffering from cholera morbus and inflammation of the bowels. It is believed that he died of peritonitis from a ruptured appendix. His last words were "Joseph! Joseph! Joseph!", invoking the name of the late Joseph Smith, founder of the Mormon faith. On September 2, 1877, Young's funeral was held in the Tabernacle with an estimated 12,000 to 15,000 people in attendance. He is buried on the grounds of the Mormon Pioneer Memorial Monument in the heart of Salt Lake City. A bronze marker was placed at the grave site on June 10, 1938, by members of the Young Men and Young Women organizations, which he founded. A century after his death, one writer credited Young's leadership with helping to settle much of the American West. Memorials to Young include a bronze statue in front of the Abraham O. Smoot Administration Building, Brigham Young University; a marble statue in the National Statuary Hall Collection at the United States Capitol, donated by the State of Utah in 1950; and a statue atop the "This is the Place Monument" in Salt Lake City.
Young's teachings were the 1998–99 course of study in the LDS Church's Sunday Relief Society and Melchizedek priesthood classes. Young was a polygamist, marrying a total of 55 wives, 54 of them after he converted to Mormonism. The policy was difficult for many in the church. Young stated that upon being taught about plural marriage, "It was the first time in my life that I desired the grave." By the time of his death, Young had 56 children by 16 of his wives; 46 of his children reached adulthood. Sources have varied on the number of Young's wives, due to differences in what scholars have considered to be a "wife". There were 55 women to whom Young was sealed during his lifetime. While the majority of the sealings were "for eternity", some were "for time only". Researchers believe that not all of the 55 marriages were conjugal. Young did not live with a number of his wives or publicly hold them out as wives, which has led to confusion on their number and identities. This is in part due to the complexity of how wives were identified in the Mormon society at the time. Of Young's 55 wives, 21 had never been married before; 16 were widows; six were divorced; six had living husbands; and the marital status of six others is unknown. In 1856, Young built the Lion House to accommodate his sizable family. This building remains a Salt Lake City landmark, together with the Beehive House, another Young family home. A contemporary of Young wrote: "It was amusing to walk by Brigham Young's big house, a long rambling building with innumerable doors. Each wife has an establishment of her own, consisting of parlor, bedroom, and a front door, the key of which she keeps in her pocket." At the time of Young's death, 19 of his wives had predeceased him; he was divorced from ten, and 23 survived him. The status of four was unknown. One of his wives, Zina Huntington Young, served as the third president of the Relief Society.
In his will, Young shared his estate with the 16 surviving wives who had lived with him; the six surviving non-conjugal wives were not mentioned in the will. In 1902, 25 years after his death, "The New York Times" established that Young's direct descendants numbered more than 1,000. Some of Young's descendants have become leaders in the LDS Church. Brigham Young appears at the end of "Le Fil qui chante", the last Lucky Luke album written by Goscinny. The Scottish poet John Lyon, who was an intimate friend of Young, wrote "Brigham the Bold" in tribute to him after his death. Florence Claxton's graphic novel, "The Adventures of a Woman in Search of Her Rights" (1872), satirizes a would-be emancipated woman whose failure to establish an independent career results in her marriage to Young before she wakes to discover she has been dreaming. Arthur Conan Doyle based his first Sherlock Holmes novel, "A Study in Scarlet", on Mormon history, mentioning Young by name. When asked to comment on the story, which had "provoked the animosity of the Mormon faithful", Doyle noted, "all I said about the Danite Band and the murders is historical so I cannot withdraw that though it is likely that in a work of fiction it is stated more luridly than in a work of history." Doyle's daughter stated: "You know father would be the first to admit that his first Sherlock Holmes novel was full of errors about the Mormons." Mark Twain devoted a chapter and much of an appendix to Young in "Roughing It". Oliver Wendell Holmes Sr., talking about his fondness of trees, joked in "The Autocrat of the Breakfast-Table": "I call all trees mine that I have put my wedding-ring on, and I have as many tree-wives as Brigham Young has human ones." Brigham Young was played by Dean Jagger in the 1940 film "Brigham Young" and by Terence Stamp in the 2007 film "September Dawn".
Byron Morrow played Young in a cameo appearance in the "Death Valley Days" 1966 episode, "An Organ for Brother Brigham". In the story line, the organ built and guided west to Salt Lake City by Joseph Harris Ridges (1827–1914) of Australia becomes mired in the sand. Wagonmaster Luke Winner (Morgan Woodward) feels compelled to leave the instrument behind until Ridges finds solid rock under the sand. In another "Death Valley Days" episode in 1969, "Biscuits and Billy, the Kid", Michael Hinn (1913–1988) of the former "Boots and Saddles" western series was cast as Young. In the story line, the Tugwell family, Jason (Ben Cooper), Ellie (Emily Banks), and Mary (Erin Moran), are abandoned by their guide while on a wagon train from Utah to California. Gregg Henry depicts Young in the fourth (2014) and fifth (2015) seasons of the TV series "Hell on Wheels", a fictional story about the construction of the First Transcontinental Railroad. As the competing rail lines approach Utah from the east and west coasts, Young supplies Mormon laborers to both railroad companies and negotiates with the railways to have them make Salt Lake City their meeting point. In the Season 5 mid-season finale, "False Prophets", Young's son, Phineas, attempts to murder his father. Persuaded by The Swede, Phineas believed he was the chosen one to go forward to lead the Mormons, instead of his father. Since Young's death, a number of works have published collections of his discourses and sayings.
https://en.wikipedia.org/wiki?curid=5048
Bill Bryson William McGuire Bryson (; born 8 December 1951) is an American-British author of books on travel, the English language, science, and other non-fiction topics. Born in the United States, he has been a resident of Britain for most of his adult life, returning to the United States between 1995 and 2003, and holds dual American and British citizenship. He served as the chancellor of Durham University from 2005 to 2011. Bryson came to prominence in the United Kingdom with the publication of "Notes from a Small Island" (1995), an exploration of Britain, and its accompanying television series. He received widespread recognition again with the publication of "A Short History of Nearly Everything" (2003), a book widely acclaimed for its accessible communication of science. Bryson was born and raised in Des Moines, Iowa, the son of Bill Bryson Sr., a sports journalist who worked for fifty years at the "Des Moines Register", and Agnes Mary (née McGuire), the home furnishings editor at the same newspaper. His mother was of Irish descent. He had an older brother, Michael (1942–2012), and a sister, Mary Jane Elizabeth. In 2006, Bryson published "The Life and Times of the Thunderbolt Kid", a humorous account of his childhood years in Des Moines. Bryson attended Drake University for two years before dropping out in 1972, deciding instead to backpack around Europe for four months. He returned to Europe the following year with a high school friend, Matt Angerer (the pseudonymous Stephen Katz). Bryson wrote about some of his experiences from this trip in his book "Neither Here nor There: Travels in Europe". Bryson first visited Britain in 1973 during his tour of Europe and decided to stay after landing a job working in a psychiatric hospital—the now-defunct Holloway Sanatorium in Virginia Water, Surrey. He met a nurse there named Cynthia Billen, whom he married in 1975. They moved to Bryson's hometown of Des Moines, Iowa, in 1975 so that Bryson could complete his college degree at Drake University.
In 1977 they settled in Britain. He worked as a journalist, first for the "Bournemouth Evening Echo", eventually becoming chief copy editor of the business section of "The Times" and deputy national news editor of the business section of "The Independent". He has moved around the UK and lived in Virginia Water (Surrey), Purewell (Dorset), Burton (Dorset), Kirkby Malham (North Yorkshire, in the 1980s and '90s), and the Old Rectory in Wramplingham, Norfolk (2003–2013). He currently lives in rural Hampshire and maintains a small flat in South Kensington, London. From 1995 to 2003 he lived in Hanover, New Hampshire. Although able to apply for British citizenship, Bryson said in 2010 that he had declined a citizenship test, declaring himself "too cowardly" to take it. However, in 2014, he said that he was preparing to take it and in the prologue to his 2015 book "" he describes doing so, in Eastleigh. His citizenship ceremony took place in Winchester and he now holds dual citizenship. While living in the US in the 1990s Bryson wrote a column for a British newspaper for several years, reflecting on humorous aspects of his repatriation in the United States. These columns were selected and adapted to become his book "I'm a Stranger Here Myself", alternatively titled "Notes from a Big Country" in Britain, Canada, and Australia. During his time in the United States, Bryson decided to walk the Appalachian Trail with his friend Stephen Katz (a pseudonym), about which he wrote the book "A Walk in the Woods". In the 2015 film adaptation of "A Walk in the Woods", Bryson is portrayed by Academy Award winner Robert Redford and Katz is portrayed by Nick Nolte (Bryson is portrayed as being much older than he was at the time of his actual walk). In 2003, in conjunction with World Book Day, British voters chose Bryson's book "Notes from a Small Island" as that which best sums up British identity and the state of the nation. 
In the same year, he was appointed a Commissioner for English Heritage. His popular science book, "A Short History of Nearly Everything" is 500 pages long and explores not only the histories and current statuses of the sciences, but also reveals their humble and often humorous beginnings. Although one "top scientist" is alleged to have jokingly described the book as "annoyingly free of mistakes", Bryson himself makes no such claim and a list of some reported errors in the book is available online. In November 2006, Bryson interviewed the then British prime minister, Tony Blair, on the state of science and education. Bryson has also written two popular works on the history of the English language—"The Mother Tongue" and "Made in America"—and, more recently, an update of his guide to usage, "Bryson's Dictionary of Troublesome Words" (published in its first edition as "The Penguin Dictionary of Troublesome Words" in 1983). In 2012 Bryson sued his agent, Jed Mattes Inc., in New York County Supreme Court, claiming it had "failed to perform some of the most fundamental duties of an agent". The case was settled out of court, with part of the settlement being that Bryson may not discuss it. In 2013 Bryson claimed copyright on an interview he had given nearly 20 years previously, after the interviewer republished it as an 8000-word e-book. Amazon removed the e-book from publication, but the claim was controversial as interviews are generally considered to be the creative work of the interviewer. In 2005 Bryson was appointed chancellor of Durham University, succeeding the late Sir Peter Ustinov, and became more active with student activities than is common for holders of that post, even appearing in a Durham student film and promoting litter picks in the city. He had praised Durham as "a perfect little city" in "Notes from a Small Island". In October 2010, it was announced that Bryson would step down at the end of 2011. 
In May 2007, he became the president of the Campaign to Protect Rural England. His first area of focus in this role was the establishment of an anti-littering campaign across England. He discussed the future of the countryside with Richard Mabey, Sue Clifford, Nicholas Crane and Richard Girling at CPRE's Volunteer Conference in November 2007. Bryson has received numerous awards for his ability to communicate science with passion and enthusiasm. In 2004, he won the prestigious Aventis Prize for the best general science book of that year with "A Short History of Nearly Everything". In 2005, the book won the EU Descartes Prize for science communication. In 2005 he received the President's Award from the Royal Society of Chemistry for advancing the cause of the chemical sciences. In 2007, he won the Bradford Washburn Award from the Museum of Science in Boston, Massachusetts, for contributions to the popularization of science. In 2012, he received the Kenneth B. Myer Award from the Florey Institute of Neuroscience in Melbourne, Australia. The Bill Bryson Prize for Science Communication was established with the Royal Society of Chemistry in 2005; the competition engages students from around the world in explaining science to non-experts. He was awarded an honorary Officer of the Order of the British Empire (OBE) for his contribution to literature on 13 December 2006. The following year, he was awarded the James Joyce Award by the Literary and Historical Society of University College Dublin. After he received British citizenship his OBE was made substantive. In 2011 he won the Golden Eagle Award from the Outdoor Writers and Photographers Guild. On 22 November 2012, Durham University officially renamed the Main Library the Bill Bryson Library for his contributions as the university's 11th chancellor (2005–11). The library also has a cafe named after Bryson's book "Notes from a Small Island".
Bryson was elected an Honorary Fellow of the Royal Society (FRS) in 2013, becoming the first non-Briton upon whom this honour has been conferred. His biography at the Society reads: "Bill Bryson is a popular author who is driven by a deep curiosity for the world we live in. Bill's books and lectures demonstrate an abiding love for science and an appreciation for its social importance. His international bestseller, 'A Short History of Nearly Everything' (2003), is widely acclaimed for its accessible communication of science and has since been adapted for children." In 2006 Frank Cownie, the mayor of Des Moines, awarded Bryson the key to the city and announced that 21 October 2006 would be known as "Bill Bryson, The Thunderbolt Kid, Day". In January 2007, he was the Schwartz Visiting Fellow at the Pomfret School in Connecticut.
https://en.wikipedia.org/wiki?curid=5050
Big Audio Dynamite Big Audio Dynamite (later known as Big Audio Dynamite II and Big Audio, and often abbreviated BAD) are an English band formed in London in 1984 by Mick Jones, the former lead guitarist of the Clash, who has been their only constant member. The band mixed various musical styles, incorporating elements of punk rock, dance music, hip hop, reggae, and funk. After releasing a number of well-received albums and touring extensively throughout the 1980s and 1990s, Big Audio Dynamite broke up in 1997. In 2011, the band embarked on a reunion tour. After being fired from the Clash in 1983 and following a brief stint with the band General Public, Mick Jones formed a new band called Top Risk Action Company (T.R.A.C.). He recruited bassist Leo "E-Zee Kill" Williams, saxophonist John "Boy" Lennard (from Theatre of Hate), and former Clash drummer Nicky "Topper" Headon. Headon was quickly sacked over his heroin addiction, Lennard either left or was fired, and the band folded. Although the band released no material (only demos were recorded, which have yet to be officially released), T.R.A.C. can be seen as a forerunner to Big Audio Dynamite in much the same way London SS can be seen as an early incarnation of the Clash. Jones then formed Big Audio Dynamite with film director Don Letts (maker of "The Punk Rock Movie", various Clash music videos, and later the Clash documentary ""), bassist Leo Williams (from T.R.A.C.), drummer Greg Roberts, and keyboardist Dan Donovan. In 1985 the band's debut, "This Is Big Audio Dynamite", was released. The album's cover shows the band as a four-piece, minus Donovan, who took and designed the photo. In 2016 "This Is Big Audio Dynamite" was reissued on vinyl, with the album being mastered using analog tapes and pressed on 180-gram vinyl. 1986's "No. 10, Upping St."
reunited Jones for one last album with former Clash lyricist and lead vocalist Joe Strummer, who was credited with co-producing the album and co-writing five of its nine tracks. The cover painting, based on a still taken from the Brian De Palma film "Scarface", was painted by Tim Jones. BAD supported U2 on their 1987 world tour, then released 1988's "Tighten Up Vol. 88" and 1989's "Megatop Phoenix". "Tighten Up, Vol. 88" contained "Just Play Music!", which was the second No. 1 single on Billboard's Modern Rock Tracks chart. The band also recorded an unreleased track called "Keep off the Grass", a rock-style instrumental of the theme to the classic western film "The Magnificent Seven"; a promo video can be seen on YouTube. In 1990, the original line-up wrote and recorded the song "Free" for the soundtrack to the adventure comedy film "Flashback". This would be the final song written with the original line-up, as the band would break up shortly after. "The Bottom Line" from the band's first album was remixed and used as the title track for "Flashback"; however, this track was not included on the soundtrack and can be found on the 12" single or by download. Later in 1990, Mick Jones debuted Big Audio Dynamite II and released the UK-only album "Kool-Aid". Dan Donovan remained in BAD II for one song, a re-working of the final BAD track "Free", renamed "Kickin' In". For 1991's "The Globe", only Jones remained from the original incarnation of Big Audio Dynamite, and the band was now called "Big Audio Dynamite II". This new line-up featured two guitarists. "The Globe" featured the band's most commercially successful single, "Rush", which hit No. 1 on both the US Modern Rock Tracks chart and the Australian National ARIA Chart. "Innocent Child" and "The Globe" were also released as singles. BAD supported U2 on their Zoo TV Tour, headlined the MTV 120 Minutes tour, which also featured Public Image Ltd, Live, and Blind Melon, and released the live EP "On the Road Live '92".
In 1991, while Mick Jones formed Big Audio Dynamite II, the rest of the original lineup briefly formed a band called Screaming Target. They released one album, "Hometown Hi Fi", and two singles, "Who Killed King Tubby?" and "Knowledge N Numbers", before disbanding. The title and album cover were purposely meant as a tribute to Big Youth's reggae album "Screaming Target". In 1993, Greg Roberts formed the electronic band Dreadzone with Tim Bran, with the name suggested to them by Don Letts. Leo Williams and Dan Donovan joined the band before their second album "Second Light" and the single "Little Britain" in 1995. Dreadzone is still active, with Roberts and Williams remaining members. The band later recruited keyboardist Andre Shapps (co-producer of "The Globe" and Mick Jones's cousin) and Michael "Lord Zonka" Custance as DJ and vocalist. Both appeared on the band's 1994 album "Higher Power", which was released under the shortened name "Big Audio". After signing a recording contract with Gary Kurfirst's Radioactive Records in 1995, the band reverted to the original "Big Audio Dynamite" moniker and released their least successful album to date, "F-Punk". Radioactive Records refused to release the next proposed BAD album, "Entering a New Ride". The line-up featured MC vocals by Joe Attard of Punks Jump Up, Ranking Roger of the Beat and General Public, and drummer Bob Wond of Under Two Flags. In 1998, the band launched a new website, primarily intended as a means to distribute songs from the "Entering a New Ride" album. In 2001, after only six songs from the album had been released, the website went down and Big Audio Dynamite disbanded. Their final album was never properly released in its entirety, but it has been heavily leaked online for fans who wished to hear it. Since 2005, Jones has been working on a project with Tony James (ex-member of Generation X and Sigue Sigue Sputnik) called Carbon/Silicon.
In early 2007, an eight-song BAD II live DVD was released, titled "Big Audio Dynamite Live: E=MC²". In 2010, Don Letts revealed to Billboard.com that he and Mick Jones had broached the idea of a Big Audio Dynamite reunion in 2011. He explained, "I could lie to you and say 'Not in a million years,' but... if Mick wasn't tied up with Gorillaz it might happen this year. (Jones) has looked at me and said, 'Maybe next year,' but who knows. I've got to admit that in the past I'm not a great one for reformations; I always think if you're lucky in life, you get a window of opportunity, use it to the best of your ability and then fuck off and let someone else have their turn. But here I am 25 years down the line considering the thing." Besides a Big Audio Dynamite reunion, Letts said he was also hopeful for more Legacy Editions of the band's albums after finding more unreleased material, including live recordings, in the vaults. "There's definitely more stuff; whether Sony thinks it's worthwhile, that's another matter. But there seems to be a lot of respect for Big Audio Dynamite. Time has shown that a lot of the things we were dabbling in back then have come to manifest themselves today...so hopefully we'll get to do some more." The reformation of the original line-up of BAD was confirmed on 25 January 2011 with the announcement of a UK tour. The nine-date tour was a commercial and critical success. The first of their two sold-out Shepherd's Bush Empire shows received a four-star review in "The Times" ('Not just a reformation - this is "their" time'), and "The Observer" welcomed BAD's return with a glowing review declaring 'they remain a joy'. "News of the World" awarded their Manchester Academy show a five-star review and proclaimed, 'Easily the reformation of the year'. Their headline slot at Beautiful Days festival was favourably reviewed on the Louder Than War music website.
Big Audio Dynamite played sets at the 2011 Outside Lands Music and Arts Festival, Coachella Valley Music and Arts Festival, Glastonbury Festival 2011, and Lollapalooza. Over its career the band recorded as Big Audio Dynamite (1984–1990, 2011), Big Audio Dynamite II (1990–1993), Big Audio (1994–1995), and Big Audio Dynamite again (1996–1998).
https://en.wikipedia.org/wiki?curid=5051
Bentley Bentley Motors Limited () is a British manufacturer and marketer of luxury cars and SUVs, and a subsidiary of the Volkswagen Group since 1998. Headquartered in Crewe, England, the company was founded as Bentley Motors Limited by W. O. Bentley in 1919 in Cricklewood, North London, and became widely known for winning the 24 Hours of Le Mans in 1924, 1927, 1928, 1929 and 1930. Prominent models extend from the historic sports-racing Bentley 4½ Litre and Bentley Speed Six, through the more recent Bentley R Type Continental, Bentley Turbo R, and Bentley Arnage, to its current model line, including the Flying Spur, Continental GT, Bentayga and Mulsanne, which are marketed worldwide, with China as its largest market as of November 2012. Today most Bentley models are assembled at the company's Crewe factory, with a small number assembled at Volkswagen's Dresden factory in Germany, bodies for the Continental manufactured in Zwickau, and bodies for the Bentayga manufactured at the Volkswagen Bratislava Plant. The joining and eventual separation of Bentley and Rolls-Royce followed a series of mergers and acquisitions, beginning with the 1931 purchase by Rolls-Royce of Bentley, then in receivership. In 1971, Rolls-Royce itself was forced into receivership and the UK government nationalised the company, splitting it into two companies: an aerospace division (Rolls-Royce Plc) and an automotive division (Rolls-Royce Motors Limited), the latter retaining the Bentley subdivision. Rolls-Royce Motors was subsequently sold to the engineering conglomerate Vickers, and in 1998 Vickers sold Rolls-Royce to Volkswagen AG. Intellectual property rights to both the "Rolls-Royce" name and the company's logo had been retained not by Rolls-Royce Motors but by the aerospace company, Rolls-Royce Plc, which had continued to license both to the automotive division.
Thus the sale of "Rolls-Royce" to VW included the Bentley name and logos, vehicle designs, model nameplates, production and administrative facilities, and the Spirit of Ecstasy and Rolls-Royce grille shape trademarks (subsequently sold to BMW by VW), but not the rights to the Rolls-Royce name or logo. The aerospace company, Rolls-Royce Plc, ultimately sold both to BMW AG. Before World War I, Walter Owen Bentley and his brother, Horace Millner Bentley, sold French DFP cars in Cricklewood, North London, but W.O., as Walter was known, always wanted to design and build his own cars. At the DFP factory, in 1913, he noticed an aluminium paperweight and thought that aluminium might be a suitable replacement for cast iron to fabricate lighter pistons. The first Bentley aluminium pistons were fitted to Sopwith Camel aero engines during World War I. In August 1919, W.O. registered Bentley Motors Ltd. and in October he exhibited a car chassis, with dummy engine, at the London Motor Show. Ex–Royal Flying Corps officer Clive Gallop designed an innovative four-valves-per-cylinder engine for the chassis. By December the engine was built and running. Delivery of the first cars was scheduled for June 1920, but development took longer than estimated, so the date was extended to September 1921. The durability of the first Bentley cars earned widespread acclaim, and they competed in hill climbs and raced at Brooklands. Bentley's first major event was the 1922 Indianapolis 500, a race dominated by specialized cars with Duesenberg racing chassis. The team entered a modified road car driven by works driver Douglas Hawkes, accompanied by riding mechanic H. S. "Bertie" Browning. Hawkes completed the full race distance and finished 13th after starting in 19th position. The team was then rushed back to England to compete in the 1922 RAC Tourist Trophy. In an ironic reference to his heavyweight boxer's stature, Captain Woolf Barnato was nicknamed "Babe".
In 1925, he acquired his first Bentley, a 3-litre. With this car he won numerous Brooklands races. Just a year later he acquired the Bentley business itself. The Bentley enterprise was always underfunded, but inspired by the 1924 Le Mans win by John Duff and Frank Clement, Barnato agreed to finance Bentley's business. Barnato had incorporated Baromans Ltd in 1922, which existed as his finance and investment vehicle. Via Baromans, Barnato initially invested in excess of £100,000, saving the business and its workforce. A financial reorganisation of the original Bentley company was carried out and all existing creditors were paid off for £75,000. Existing shares were devalued from £1 each to just 1 shilling, or 5% of their original value. Barnato held 149,500 of the new shares, giving him control of the company, and he became chairman. Barnato injected further cash into the business: £35,000 secured by debenture in July 1927; £40,000 in 1928; £25,000 in 1929. With renewed financial input, W. O. Bentley was able to design another generation of cars. The Bentley Boys were a group of British motoring enthusiasts that included Barnato, Sir Henry "Tim" Birkin, steeplechaser George Duller, aviator Glen Kidston, automotive journalist S.C.H. "Sammy" Davis, and Dudley Benjafield. The Bentley Boys favoured Bentley cars. Many were independently wealthy and many had a military background. They kept the marque's reputation for high performance alive; Bentley was noted for its four consecutive victories at the 24 Hours of Le Mans, from 1927 to 1930. In 1929, Birkin developed the 4½-litre, lightweight Blower Bentley at Welwyn Garden City and produced five racing specials, starting with Bentley Blower No.1, which was optimised for the Brooklands racing circuit. Birkin overruled Bentley and put the model on the market before it was fully developed. As a result, it was unreliable.
In March 1930, during the Blue Train Races, Barnato raised the stakes on Rover and its Rover Light Six, which had just raced and beaten "Le Train Bleu" for the first time, betting £100 that he could better that record with his 6½-litre Bentley Speed Six. He drove against the train from Cannes to Calais, then by ferry to Dover, and finally to London, travelling on public highways, and won. Barnato drove his H.J. Mulliner-bodied formal saloon in the race against the Blue Train. Two months later, on 21 May 1930, he took delivery of a Speed Six with a streamlined fastback "sportsman coupé" body by Gurney Nutting. Both cars became known as the "Blue Train Bentleys"; the latter is regularly mistaken for, or erroneously referred to as, the car that raced the Blue Train, while in fact Barnato named it in memory of his race. A painting by Terence Cuneo depicts the Gurney Nutting coupé racing along a road parallel to the Blue Train, a scenario that never occurred, as the road and railway did not follow the same route. The original model was the three-litre, but as customers put heavier bodies on the chassis, a larger 4½-litre model followed. Perhaps the most iconic model of the period is the 4½-litre "Blower Bentley", with its distinctive supercharger projecting forward from the bottom of the grille. Uncharacteristically fragile for a Bentley, it was not the racing workhorse the 6½-litre was, though in 1930 Birkin remarkably finished second in the French Grand Prix at Pau in a stripped-down racing version of the Blower Bentley, behind Philippe Etancelin in a Bugatti Type 35. The 4½-litre model later became famous in popular media as the vehicle of choice of James Bond in the original novels, but it has been seen only briefly in the films. John Steed in the television series "The Avengers" also drove a Bentley.
The new eight-litre was such a success that when Barnato's money seemed to run out in 1931 and Napier was planning to buy Bentley's business, Rolls-Royce purchased Bentley Motors to prevent it from competing with their most expensive model, the Phantom II. Bentley withdrew from motor racing just after winning at Le Mans in 1930, claiming that they had learned enough about speed and reliability. The Wall Street Crash of 1929 and the resulting Great Depression throttled the demand for Bentley's expensive motor cars. In July 1931, two mortgage payments were due which neither the company nor Barnato, the guarantor, were able to meet. On 10 July 1931 a receiver was appointed. Napier offered to buy Bentley with the purchase to be final in November 1931. Instead, British Central Equitable Trust made a winning sealed bid of £125,000. British Central Equitable Trust later proved to be a front for Rolls-Royce Limited. Not even Bentley himself knew the identity of the purchaser until the deal was completed. Barnato received £42,000 for his shares in Bentley Motors. In 1934 he was appointed to the board of the new Bentley Motors (1931) Ltd. In the same year Bentley confirmed that it would continue racing. Rolls-Royce took over the assets of Bentley Motors (1919) Ltd and formed a subsidiary, Bentley Motors (1931) Ltd. Rolls-Royce had acquired the Bentley showrooms in Cork Street, the service station at Kingsbury, the complex at Cricklewood and the services of Bentley himself. This last was disputed by Napier in court without success. Bentley had neglected to register their trademark so Rolls-Royce immediately did so. They also sold the Cricklewood factory in 1932. Production stopped for two years, before resuming at the Rolls-Royce works in Derby. Unhappy with his role at Rolls-Royce, when his contract expired at the end of April 1935 W. O. Bentley left to join Lagonda. 
When the new Bentley 3½ Litre appeared in 1933, it was a sporting variant of the Rolls-Royce 20/25, which disappointed some traditional customers yet was well received by many others. W. O. Bentley was reported as saying, "Taking all things into consideration, I would rather own this Bentley than any other car produced under that name". Rolls-Royce's advertisements for the 3½ Litre called it "the silent sports car", a slogan Rolls-Royce continued to use for Bentley cars until the 1950s. All Bentleys produced from 1931 to 2004 used inherited or shared Rolls-Royce chassis and adapted Rolls-Royce engines, and are described by critics as badge-engineered Rolls-Royces. In preparation for war, Rolls-Royce and the British Government searched for a location for a shadow factory to ensure production of aero-engines. Crewe, with its excellent road and rail links, as well as being located in the northwest, away from the aerial bombing starting in mainland Europe, was a logical choice. Crewe also had extensive open farming land. Construction of the factory started on a 60-acre area of the potato fields of Merrill's Farm in July 1938, with the first Rolls-Royce Merlin aero-engine rolling off the production line five months later. 25,000 Merlin engines were produced, and at its peak in 1943, during World War II, the factory employed 10,000 people. With the war in Europe over and the general move towards the then-new jet engines, Rolls-Royce concentrated its aero engine operations at Derby and moved motor car operations to Crewe. Until some time after World War II, most high-end motorcar manufacturers like Bentley and Rolls-Royce did not supply complete cars. They sold rolling chassis, near-complete from the instrument panel forward. Each chassis was delivered to the coachbuilder of the buyer's choice. The biggest specialist car dealerships had coachbuilders build standard designs for them, which were held in stock awaiting potential buyers.
To meet post-war demand, particularly UK Government pressure to export and earn overseas currency, Rolls-Royce developed an all-steel body using pressings made by Pressed Steel to create a "standard" ready-to-drive complete saloon car. The first steel-bodied model produced was the Bentley Mark VI: these started to emerge from the newly reconfigured Crewe factory early in 1946. Some years later, initially only for export, the Rolls-Royce Silver Dawn was introduced, essentially the standard steel Bentley fitted with a Rolls-Royce radiator grille for a small extra charge, and this convention continued. Chassis remained available to coachbuilders until the end of production of the Bentley S3, which was replaced for October 1965 by the chassis-less monocoque construction T series. The Continental fastback coupé was aimed at the UK market, with most cars, 164 plus a prototype, being right-hand drive. The chassis was produced at the Crewe factory and shared many components with the standard R Type. Other than the R-Type standard steel saloon, R-Type Continentals were delivered as rolling chassis to the coachbuilder of choice. Coachwork for most of these cars was completed by H. J. Mulliner & Co., who mainly built them in fastback coupé form. Other coachwork came from Park Ward (London), who built six, later including a drophead coupé version. Franay (Paris) built five, Graber (Wichtrach, Switzerland) built three, one of them later altered by Köng (Basel, Switzerland), and Pininfarina made one. In 1954 James Young (London) built a sports saloon for the owner of James Young's, James Barclay. The early R-Type Continental has essentially the same engine as the standard R Type, but with modified carburation, induction and exhaust manifolds, along with higher gear ratios. After July 1954 the car was fitted with an engine with a larger bore of 94.62 mm (3.7 in), increasing total displacement, and the compression ratio was raised to 7.25:1.
Rolls-Royce's problems with development of the RB211 aero engine brought about the financial collapse of Bentley's owner in 1970. The motorcar division was made a separate business, Rolls-Royce Motors Limited, which remained independent until bought by Vickers plc in August 1980. By the 1970s and early 1980s Bentley sales had fallen badly; at one point less than 5% of combined production carried the Bentley badge. Under Vickers, Bentley set about regaining its high-performance heritage, typified by the 1980 Mulsanne. Bentley's restored sporting image created a renewed interest in the name, and Bentley sales as a proportion of output began to rise. By 1986 the Bentley:Rolls-Royce ratio had reached 40:60; by 1991 it achieved parity. In October 1997, Vickers announced that it had decided to sell Rolls-Royce Motors. BMW AG seemed to be a logical purchaser, because BMW already supplied engines and other components for Bentley and Rolls-Royce branded cars and because of joint efforts by BMW and Vickers in building aircraft engines. BMW made a final offer of £340m, but was outbid by Volkswagen AG, which offered £430m. Volkswagen AG acquired the vehicle designs, model nameplates, production and administrative facilities, and the Spirit of Ecstasy and Rolls-Royce grille shape trademarks, but not the rights to the use of the Rolls-Royce name or logo, which are owned by Rolls-Royce Holdings plc. In 1998, BMW started supplying components for the new range of Rolls-Royce and Bentley cars, notably V8 engines for the Bentley Arnage and V12 engines for the Rolls-Royce Silver Seraph; however, the supply contract allowed BMW to terminate its supply deal with Rolls-Royce with 12 months' notice, which would not be enough time for Volkswagen to re-engineer the cars. BMW paid Rolls-Royce plc £40m to license the Rolls-Royce name and logo.
After negotiations, BMW and Volkswagen AG agreed that, from 1998 to 2002, BMW would continue to supply engines and components and would allow Volkswagen temporary use of the Rolls-Royce name and logo. All BMW engine supply ended in 2003 with the end of Silver Seraph production. From 1 January 2003 forward, Volkswagen AG would be the sole provider of cars with the "Bentley" marque. BMW established a new legal entity, Rolls-Royce Motor Cars Limited, and built a new administrative headquarters and production facility for Rolls-Royce branded vehicles in Goodwood, West Sussex, England. After acquiring the business, Volkswagen spent £500 million (about US$845 million) to modernise the Crewe factory and increase production capacity. As of early 2010, there were about 3,500 people working at Crewe, compared with about 1,500 in 1998, before the takeover by Volkswagen. It was reported that Volkswagen invested a total of nearly US$2 billion in Bentley and its revival. As a result of upgrading facilities at Crewe, the bodywork now arrives fully painted at the Crewe facility for final assembly, with the parts coming from Germany; similarly, Rolls-Royce body shells are painted and shipped to the UK for assembly only. Demand had been so great that the factory at Crewe was unable to meet orders despite an installed capacity of approximately 9,500 vehicles per year; there was a waiting list of over a year for new cars to be delivered. Consequently, part of the production of the new Flying Spur, a four-door version of the Continental GT, was assigned to the Transparent Factory (Germany), where the Volkswagen Phaeton luxury car was also assembled. This arrangement ceased at the end of 2006 after around 1,000 cars, with all car production reverting to the Crewe plant. In 2002, Bentley presented Queen Elizabeth II with an official State Limousine to celebrate her Golden Jubilee.
In 2003, Bentley's two-door convertible, the Bentley Azure, ceased production, and Bentley introduced a second line, the Bentley Continental GT, a large luxury coupé powered by a W12 engine built in Crewe. In April 2005, Bentley confirmed plans to produce a four-seat convertible model, the Azure, derived from the Arnage Drophead Coupé prototype, at Crewe beginning in 2006. By the autumn of 2005, the convertible version of the successful Continental GT, the Continental GTC, was also presented. These two models were successfully launched in late 2006. A limited run of a Zagato-modified GT was also announced in March 2008, dubbed the "GTZ". A new version of the Bentley Continental was introduced at the 2009 Geneva Motor Show: the Continental Supersports. This new Bentley is a supercar combining extreme power with environmentally friendly FlexFuel technology, capable of running on petrol (gasoline) and biofuel (E85 ethanol). Bentley sales continued to increase, and in 2005 8,627 cars were sold worldwide, 3,654 of them in the United States. In 2007, the 10,000-cars-per-year threshold was broken for the first time, with sales of 10,014. For 2007, a record profit of €155 million was also announced. Bentley reported sales of about 7,600 units in 2008. However, its global sales plunged 50 percent to 4,616 vehicles in 2009 (with U.S. deliveries dropping 49% to 1,433 vehicles) and it suffered an operating loss of €194 million, compared with an operating profit of €10 million in 2008. As a result of the slump in sales, production at Crewe was shut down during March and April 2009. Though vehicle sales increased by 11% to 5,117 in 2010, the operating loss grew by 26% to €245 million. In autumn 2010, workers at Crewe staged a series of protests over proposals for compulsory work on Fridays and mandatory overtime during the week. Vehicle sales in 2011 rose 37% to 7,003 vehicles, with the new Continental GT accounting for over one-third of total sales. The current workforce is about 4,000 people.
The business returned to profit in 2011, after two years of losses. In June 2020, Bentley announced that it would cut around 1,000 jobs in the UK, about a quarter of its 4,200-strong workforce, as a result of the COVID-19 pandemic. (Sources: Volkswagen AG annual reports and press releases.) Unsold cars: during 2011 and 2012, production exceeded deliveries by 1,187 cars, which is estimated to have trebled inventory. A Bentley Continental GT3 entered by the M-Sport factory team won the Silverstone round of the 2014 Blancpain Endurance Series. This was Bentley's first official entry in a British race since the 1930 RAC Tourist Trophy.
https://en.wikipedia.org/wiki?curid=5052
Chordate A chordate () is an animal of the phylum Chordata. During some period of their life cycle, chordates possess a notochord, a dorsal nerve cord, pharyngeal slits, an endostyle, and a post-anal tail: these five anatomical features define this phylum. Chordates are also bilaterally symmetric and have a coelom, metameric segmentation, and a circulatory system. The Chordata and Ambulacraria together form the superphylum Deuterostomia. Chordates are divided into three subphyla: Vertebrata (fish, amphibians, reptiles, birds, and mammals); Tunicata or Urochordata (sea squirts, salps); and Cephalochordata (which includes lancelets). There are also extinct taxa such as the Vetulicolia. Hemichordata (which includes the acorn worms) has been presented as a fourth chordate subphylum, but is now treated as a separate phylum: hemichordates and Echinodermata form the Ambulacraria, the sister phylum of the chordates. Of the more than 65,000 living species of chordates, about half are bony fish of the superclass Pisces, class Osteichthyes. Chordate fossils have been found from as early as the Cambrian explosion, 541 million years ago. Cladistically (phylogenetically), vertebrates – chordates with the notochord replaced by a vertebral column during development – are considered to be a subgroup of the clade Craniata, which consists of chordates with a skull. The Craniata and Tunicata compose the clade Olfactores. (See diagram under Phylogeny.) Chordates form a phylum of animals defined by having, at some stage in their lives, all of the following anatomical features: a notochord, a dorsal nerve cord, pharyngeal slits, an endostyle, and a post-anal tail. There are also soft constraints that separate chordates from certain other biological lineages but are not part of the formal definition. The following schema is from the fourth edition of "Vertebrate Palaeontology"; the invertebrate chordate classes are from "Fishes of the World". 
While it is structured so as to reflect evolutionary relationships (similar to a cladogram), it also retains the traditional ranks used in Linnaean taxonomy. Cephalochordates, one of the three subdivisions of chordates, are small, "vaguely fish-shaped" animals that lack brains, clearly defined heads and specialized sense organs. These burrowing filter-feeders compose the earliest-branching chordate sub-phylum. Most tunicates appear as adults in two major forms, known as "sea squirts" and salps, both of which are soft-bodied filter-feeders that lack the standard features of chordates. Sea squirts are sessile and consist mainly of water pumps and filter-feeding apparatus; salps float in mid-water, feeding on plankton, and have a two-generation cycle in which one generation is solitary and the next forms chain-like colonies. However, all tunicate larvae have the standard chordate features, including long, tadpole-like tails; they also have rudimentary brains, light sensors and tilt sensors. The third main group of tunicates, Appendicularia (also known as Larvacea), retain tadpole-like shapes and active swimming all their lives, and were for a long time regarded as larvae of sea squirts or salps. The etymology of the term Urochordata (Balfour 1881) is from the ancient Greek οὐρά (oura, "tail") + Latin chorda ("cord"), because the notochord is only found in the tail. The term Tunicata (Lamarck 1816) is recognised as having precedence and is now more commonly used. Craniates all have distinct skulls. They include the hagfish, which have no vertebrae. Michael J. Benton commented that "craniates are characterized by their heads, just as chordates, or possibly all deuterostomes, are by their tails". Most craniates are vertebrates, in which the notochord is replaced by the vertebral column. These consist of a series of bony or cartilaginous cylindrical vertebrae, generally with neural arches that protect the spinal cord, and with projections that link the vertebrae. 
However, hagfish have incomplete braincases and no vertebrae, and are therefore not regarded as vertebrates, but as members of the craniates, the group from which vertebrates are thought to have evolved. The cladistic exclusion of hagfish from the vertebrates is nevertheless controversial, as they may be degenerate vertebrates that have lost their vertebral columns. The position of lampreys is ambiguous. They have complete braincases and rudimentary vertebrae, and therefore may be regarded as vertebrates and true fish. However, molecular phylogenetics, which uses biochemical features to classify organisms, has produced both results that group them with vertebrates and others that group them with hagfish. If lampreys are more closely related to the hagfish than to the other vertebrates, this would suggest that the two form a clade, which has been named the Cyclostomata. Much comparative research based on DNA sequences is still ongoing in an attempt to sort out the simplest forms of chordates. As some lineages of the 90% of species that lack a backbone or notochord might have lost these structures over time, this complicates the classification of chordates. Some chordate lineages may only be identifiable by DNA analysis, when there is no physical trace of any chordate-like structures. Attempts to work out the evolutionary relationships of the chordates have produced several hypotheses. The current consensus is that chordates are monophyletic, meaning that the Chordata include all and only the descendants of a single common ancestor, which is itself a chordate, and that craniates' nearest relatives are tunicates. All of the earliest chordate fossils have been found in the Early Cambrian Chengjiang fauna, and include two species that are regarded as fish, which implies that they are vertebrates. Because the fossil record of early chordates is poor, only molecular phylogenetics offers a reasonable prospect of dating their emergence. 
However, the use of molecular phylogenetics for dating evolutionary transitions is controversial. It has also proved difficult to produce a detailed classification within the living chordates. Attempts to produce evolutionary "family trees" show that many of the traditional classes are paraphyletic. [Figure: diagram of the family tree of chordates.] While this has been well known since the 19th century, an insistence on only monophyletic taxa has resulted in vertebrate classification being in a state of flux. The majority of animals more complex than jellyfish and other Cnidarians are split into two groups, the protostomes and deuterostomes, the latter of which contains chordates. It seems very likely that "Kimberella" was a member of the protostomes. If so, this means the protostome and deuterostome lineages must have split some time before "Kimberella" appeared—at least , and hence well before the start of the Cambrian . The Ediacaran fossil "Ernietta", from about , may represent a deuterostome animal. Fossils of one major deuterostome group, the echinoderms (whose modern members include starfish, sea urchins and crinoids), are quite common from the start of the Cambrian, . The Mid Cambrian fossil "Rhabdotubus johanssoni" has been interpreted as a pterobranch hemichordate. Opinions differ about whether the Chengjiang fauna fossil "Yunnanozoon", from the earlier Cambrian, was a hemichordate or chordate. Another fossil, "Haikouella lanceolata", also from the Chengjiang fauna, is interpreted as a chordate and possibly a craniate, as it shows signs of a heart, arteries, gill filaments, a tail, a neural chord with a brain at the front end, and possibly eyes—although it also had short tentacles round its mouth. "Haikouichthys" and "Myllokunmingia", also from the Chengjiang fauna, are regarded as fish. "Pikaia", discovered much earlier (1911) but from the Mid Cambrian Burgess Shale (505 Ma), is also regarded as a primitive chordate. 
On the other hand, fossils of early chordates are very rare, since invertebrate chordates have no bones or teeth, and only one has been reported for the rest of the Cambrian. The evolutionary relationships between the chordate groups, and between chordates as a whole and their closest deuterostome relatives, have been debated since 1890. Studies based on anatomical, embryological, and paleontological data have produced different "family trees". Some closely linked chordates and hemichordates, but that idea is now rejected. Combining such analyses with data from a small set of ribosomal RNA genes eliminated some older ideas, but opened up the possibility that tunicates (urochordates) are "basal deuterostomes", surviving members of the group from which echinoderms, hemichordates and chordates evolved. Some researchers believe that, within the chordates, craniates are most closely related to cephalochordates, but there are also reasons for regarding tunicates (urochordates) as craniates' closest relatives. Since early chordates have left a poor fossil record, attempts have been made to calculate the key dates in their evolution by molecular phylogenetics techniques—by analyzing biochemical differences, mainly in RNA. One such study suggested that deuterostomes arose before and the earliest chordates around . However, molecular estimates of dates often disagree with each other and with the fossil record, and their assumption that the molecular clock runs at a known constant rate has been challenged. Traditionally, Cephalochordata and Craniata were grouped into the proposed clade "Euchordata", which would have been the sister group to Tunicata/Urochordata. More recently, Cephalochordata has been thought of as a sister group to the "Olfactores", which includes the craniates and tunicates. The matter is not yet settled. [Figure: phylogenetic tree of the chordate phylum. Lines show probable evolutionary relationships, including extinct taxa, which are denoted with a dagger, †.] 
[The tree includes some invertebrates; the positions (relationships) of the lancelet, tunicate, and Craniata clades are as reported.] Hemichordates ("half chordates") have some features similar to those of chordates: branchial openings that open into the pharynx and look rather like gill slits; stomochords, similar in composition to notochords, but running in a circle round the "collar", which is ahead of the mouth; and a dorsal nerve cord—but also a smaller ventral nerve cord. There are two living groups of hemichordates. The solitary enteropneusts, commonly known as "acorn worms", have long proboscises and worm-like bodies with up to 200 branchial slits, are up to long, and burrow through seafloor sediments. Pterobranchs are colonial animals, often less than long individually, whose dwellings are interconnected. Each filter feeds by means of a pair of branched tentacles, and has a short, shield-shaped proboscis. The extinct graptolites, colonial animals whose fossils look like tiny hacksaw blades, lived in tubes similar to those of pterobranchs. Echinoderms differ from chordates and their other relatives in three conspicuous ways: they possess bilateral symmetry only as larvae - in adulthood they have radial symmetry, meaning that their body pattern is shaped like a wheel; they have tube feet; and their bodies are supported by skeletons made of calcite, a material not used by chordates. Their hard, calcified shells keep their bodies well protected from the environment, and these skeletons enclose their bodies, but are also covered by thin skins. The feet are powered by another unique feature of echinoderms, a water vascular system of canals that also functions as a "lung" and is surrounded by muscles that act as pumps. Crinoids look rather like flowers, and use their feather-like arms to filter food particles out of the water; most live anchored to rocks, but a few can move very slowly. 
Other echinoderms are mobile and take a variety of body shapes, for example starfish, sea urchins and sea cucumbers. Although the name Chordata is attributed to William Bateson (1885), it was already in prevalent use by 1880. Ernst Haeckel described a taxon comprising tunicates, cephalochordates, and vertebrates in 1866. Though he used the German vernacular form, it is allowed under the ICZN code because of its subsequent latinization.
https://en.wikipedia.org/wiki?curid=5131
Charlize Theron Charlize Theron (born 7 August 1975) is a South African and American actress and producer. She is the recipient of several accolades, including an Academy Award, a Golden Globe Award, and an American Cinematheque Award. "Time" magazine named her one of the 100 most influential people in the world in 2016. As of 2019, she is one of the world's highest-paid actresses. Theron came to international prominence in the 1990s by playing the leading lady in the Hollywood films "The Devil's Advocate" (1997), "Mighty Joe Young" (1998), and "The Cider House Rules" (1999). She received critical acclaim for her portrayal of serial killer Aileen Wuornos in "Monster" (2003), for which she won the Silver Bear and the Academy Award for Best Actress, becoming the first South African to win an Oscar in an acting category. She received another Academy Award nomination for playing a sexually abused woman seeking justice in the drama "North Country" (2005). Theron has since starred in several commercially successful action films, including "Hancock" (2008), "Snow White and the Huntsman" (2012), "Prometheus" (2012), "Mad Max: Fury Road" (2015), "The Fate of the Furious" (2017), and "Atomic Blonde" (2017). She also received praise for playing troubled women in Jason Reitman's comedy-dramas "Young Adult" (2011) and "Tully" (2018), and for portraying Megyn Kelly in the drama "Bombshell" (2019), receiving a third Academy Award nomination for the last of these. Since the early 2000s, Theron has ventured into film production with her company Denver and Delilah Productions. She has produced numerous films, in many of which she also starred, including "The Burning Plain" (2008), "Dark Places" (2015), and "Long Shot" (2019). Theron became an American citizen in 2007, while retaining her South African citizenship. 
Theron was born in Benoni, in the then Transvaal Province (now Gauteng Province) of South Africa, the only child of road constructionists Gerda (born Maritz) and Charles Theron (27 November 1947 – 21 June 1991). Second Boer War military leader Danie Theron was her great-great-uncle. She is from an Afrikaner family, and her ancestry includes Dutch as well as French and German; her French forebears were early Huguenot settlers in South Africa. "Theron" is an Occitan surname (originally spelled Théron) pronounced in Afrikaans as . She grew up on her parents' farm in Benoni, near Johannesburg. On 21 June 1991, Theron's father, an alcoholic, threatened both the teenaged Charlize and her mother while drunk, physically attacking her mother and firing a gun at both of them. Theron's mother retrieved her own handgun, shot back, and killed him. The shooting was legally adjudged to have been self-defense, and her mother faced no charges. Theron attended Putfontein Primary School (Laerskool Putfontein), a period during which she has said she was not "fitting in". She was frequently unwell with jaundice throughout childhood, and the antibiotics she was administered made her upper incisor milk teeth rot (they had to be surgically removed); her teeth did not grow back until she was roughly ten years old. At 13, Theron was sent to boarding school and began her studies at the National School of the Arts in Johannesburg. Although Theron is fluent in English, her first language is Afrikaans. Although she saw herself as a dancer, at age 16 Theron won a one-year modelling contract at a local competition in Salerno and moved with her mother to Milan, Italy. After Theron spent a year modelling throughout Europe, she and her mother moved to the US, living in both New York City and Miami. In New York, she attended the Joffrey Ballet School, where she trained as a ballet dancer until a knee injury closed this career path. 
As Theron recalled in 2008: In 1994, Theron flew to Los Angeles on a one-way ticket her mother bought for her, intending to work in the film industry. During the initial months there, she lived in a motel on the $300 budget that her mother had given her; she continued receiving cheques from New York and lived "from paycheck to paycheck", to the point of stealing bread from a basket in a restaurant to survive. One day, she went to a Hollywood Boulevard bank to cash a few cheques, including one her mother had sent to help with the rent, but it was rejected because it was out-of-state and she was not an American citizen. Theron argued and pleaded with the bank teller until talent agent John Crosby, who was the next customer behind her, cashed it for her and gave her his business card. Crosby introduced Theron to an acting school, and in 1995 she played her first, non-speaking role in the horror film "". Her first speaking role was the hitwoman Helga Svelgen in "2 Days in the Valley" (1996); despite the movie's mixed reviews, Theron drew attention for her beauty and for the scene in which she fought Teri Hatcher's character. Theron feared being typecast in characters similar to Helga and recalled being asked to repeat her performance in the movie during auditions: "A lot of people were saying, 'You should just hit while the iron's hot'[...] But playing the same part over and over doesn't leave you with any longevity. And I knew it was going to be harder for me, because of what I look like, to branch out to different kinds of roles". When auditioning for "Showgirls", Theron was introduced to talent agent J. J. Harris by the co-casting director Johanna Ray. She recalled being surprised at how much faith Harris had in her potential and referred to Harris as her mentor. Harris would find scripts and movies for Theron in a variety of genres and encouraged her to become a producer. Harris remained Theron's agent for over 15 years, until her death. 
Larger roles in widely released Hollywood films followed, and her career expanded by the end of the 1990s. In the horror drama "The Devil's Advocate" (1997), credited as her break-out film, Theron starred alongside Keanu Reeves and Al Pacino as the haunted wife of an unusually successful lawyer. She subsequently starred in the adventure film "Mighty Joe Young" (1998) as the friend and protector of a giant mountain gorilla, and in the drama "The Cider House Rules" (1999) as a woman who seeks an abortion in World War II-era Maine. While "Mighty Joe Young" flopped at the box office, "The Devil's Advocate" and "The Cider House Rules" were commercially successful. She was on the cover of the January 1999 issue of "Vanity Fair" as the "White Hot Venus". She also appeared on the cover of the May 1999 issue of "Playboy" magazine, in photos taken several years earlier when she was an unknown model; Theron unsuccessfully sued the magazine for publishing them without her consent. In the early 2000s, Theron continued to take on roles in films such as "Reindeer Games" (2000), "The Yards" (2000), "The Legend of Bagger Vance" (2000), "Men of Honor" (2000), "Sweet November" (2001), "The Curse of the Jade Scorpion" (2001), and "Trapped" (2002), all of which, despite achieving only limited commercial success, helped to establish her as an actress. On this period in her career, Theron remarked: "I kept finding myself in a place where directors would back me but studios didn't. [I began] a love affair with directors, the ones I really, truly admired. I found myself making really bad movies, too. "Reindeer Games" was not a good movie, but I did it because I loved [director] John Frankenheimer." Theron starred as a safe and vault "technician" in the 2003 heist film "The Italian Job", an American homage/remake of the 1969 British film of the same name, directed by F. 
Gary Gray and opposite Mark Wahlberg, Edward Norton, Jason Statham, Seth Green, and Donald Sutherland. The film was a box office success, grossing US$176 million worldwide. In "Monster" (2003), Theron portrayed serial killer Aileen Wuornos, a former prostitute who was executed in Florida in 2002 for killing six men (she was not tried for a seventh murder) in the late 1980s and early 1990s; film critic Roger Ebert felt that Theron gave "one of the greatest performances in the history of the cinema". For her portrayal, she was awarded the Academy Award for Best Actress at the 76th Academy Awards in February 2004, as well as the Screen Actors Guild Award and the Golden Globe Award. She is the first South African to win an Oscar for Best Actress. The Oscar win pushed her onto "The Hollywood Reporter's" 2006 list of the highest-paid actresses in Hollywood, in which she ranked seventh, earning up to US$10 million for a film. "AskMen" also named her the number one most desirable woman of 2003. For her role as Swedish actress and singer Britt Ekland in the 2004 HBO film "The Life and Death of Peter Sellers", Theron garnered Golden Globe Award and Primetime Emmy Award nominations. In 2005, she portrayed Rita, the mentally challenged love interest of Michael Bluth (Jason Bateman), in the third season of Fox's television series "Arrested Development", and starred in the financially unsuccessful science fiction thriller "Aeon Flux"; for her voice-over work in the "Aeon Flux" video game, she received a Spike Video Game Award for Best Performance by a Human Female. In the critically acclaimed drama "North Country" (2005), Theron portrayed a single mother and iron mine worker experiencing sexual harassment. David Rooney of "Variety" wrote: "The film represents a confident next step for lead Charlize Theron. Though the challenges of following a career-redefining Oscar role have stymied actresses, Theron segues from "Monster" to a performance in many ways more accomplished [...] 
The strength of both the performance and character anchor the film firmly in the tradition of other dramas about working-class women leading the fight over industrial workplace issues, such as "Norma Rae" or "Silkwood"." For her performance, she received Academy Award and Golden Globe Award nominations for Best Actress. "Ms." magazine also honoured her for this performance with a feature article in its Fall 2005 issue. On 30 September 2005, Theron received a star on the Hollywood Walk of Fame. In 2007, Theron played a police detective in the critically acclaimed crime film "In the Valley of Elah", and produced and starred as a reckless, slatternly mother in the little-seen drama film "Sleepwalking", alongside Nick Stahl and AnnaSophia Robb. "The Christian Science Monitor" praised the latter film, commenting that "Despite its deficiencies, and the inadequate screen time allotted to Theron (who's quite good), "Sleepwalking" has a core of feeling". In 2008, Theron starred as a woman who faced a traumatic childhood in the drama "The Burning Plain", directed by Guillermo Arriaga and opposite Jennifer Lawrence and Kim Basinger, and also played the ex-wife of an alcoholic superhero alongside Will Smith in the superhero film "Hancock". "The Burning Plain" found a limited release in theaters, but "Hancock" made US$624.3 million worldwide. Also in 2008, Theron was named the Hasty Pudding Theatricals Woman of the Year, and was asked to be a UN Messenger of Peace by the UN Secretary General Ban Ki-moon. Her film releases in 2009 were the post-apocalyptic drama "The Road", in which she briefly appeared in flashbacks, and the animated film "Astro Boy", providing her voice for a character. On 4 December 2009, Theron co-presented the draw for the 2010 FIFA World Cup in Cape Town, South Africa, accompanied by several other celebrities of South African nationality or ancestry. 
During rehearsals she drew an Ireland ball instead of France as a joke at the expense of FIFA, referring to Thierry Henry's handball controversy in the play-off match between France and Ireland. The stunt alarmed FIFA enough for it to fear she might do it again in front of a live global audience. Following a two-year hiatus from the big screen, Theron returned to the spotlight in 2011 with the black comedy "Young Adult". Directed by Jason Reitman, the film earned critical acclaim, particularly for her performance as a depressed, divorced, alcoholic 37-year-old ghostwriter. Richard Roeper awarded the film an A grade, stating "Charlize Theron delivers one of the most impressive performances of the year". She was nominated for a Golden Globe Award and several other awards. In 2011, Theron spoke about her method of working on roles: "When I'm figuring out a character, for me it's easy, since once I say yes to something, I become super-obsessed about it – and I have an obsessive nature in general. How I want to play it starts at that moment. It's a very lonely, internal experience. I think about [the character] all the time – I observe things, I see things and file things [in my head], everything geared to what I'm going to do. I'm obsessed with the human condition. You read the script and become obsessed with [a character's] nature, her habits. When the camera rolls, it's time to do my job, to do the honest truth". In 2012, Theron took on villainous roles in two big-budget films. She played Queen Ravenna, Snow White's evil stepmother, in "Snow White and the Huntsman", opposite Kristen Stewart and Chris Hemsworth, and appeared as a crew member with a hidden agenda in Ridley Scott's "Prometheus". 
Mick LaSalle of the "San Francisco Chronicle" found "Snow White and the Huntsman" to be "[a] slow, boring film that has no charm and is highlighted only by a handful of special effects and Charlize Theron's truly evil queen", while "The Hollywood Reporter" writer Todd McCarthy, describing her role in "Prometheus", asserted: "Theron is in ice goddess mode here, with the emphasis on ice [...] but perfect for the role all the same". Both films were major box office hits, each grossing around US$400 million internationally. In 2013, "Vulture"/"NYMag" named her the 68th Most Valuable Star in Hollywood, saying: "We're just happy that Theron can stay on the list in a year when she didn't come out with anything [...] any actress who's got that kind of skill, beauty, and ferocity ought to have a permanent place in Hollywood". On 10 May 2014, Theron hosted "Saturday Night Live" on NBC. In 2014, Theron took on the role of the wife of an infamous sheepherder in the western comedy film "A Million Ways to Die in the West", directed by Seth MacFarlane, which was met with mediocre reviews and moderate box office returns. In 2015, Theron played the sole survivor of the massacre of her family in the film adaptation of the Gillian Flynn novel "Dark Places", directed by Gilles Paquet-Brenner, on which she had a producer credit, and starred as Imperator Furiosa in "Mad Max: Fury Road" (2015), opposite Tom Hardy. "Mad Max: Fury Road" received widespread acclaim, with praise going to Theron for her character's dominant role. The film made US$378.4 million worldwide. Theron reprised her role as Queen Ravenna in the 2016 film "", a sequel to "Snow White and the Huntsman", which was a critical and commercial failure. In 2016, Theron also starred as a physician and activist working in West Africa in the little-seen romantic drama "The Last Face", with Sean Penn, provided her voice for the 3D stop-motion fantasy film "Kubo and the Two Strings", and produced the independent drama "Brain on Fire". 
That year, "Time" named her in the Time 100 list of the most influential people in the world. In 2017, Theron starred in "The Fate of the Furious" as the franchise's main antagonist, and played a spy on the eve of the collapse of the Berlin Wall in 1989 in "Atomic Blonde", an adaptation of the graphic novel "The Coldest City", directed by David Leitch. With a worldwide gross of US$1.2 billion, "The Fate of the Furious" became Theron's most widely seen film, and "Atomic Blonde" was described by Richard Roeper of the "Chicago Sun-Times" as "a slick vehicle for the magnetic, badass charms of Charlize Theron, who is now officially an A-list action star on the strength of this film and "Mad Max: Fury Road"". In the black comedy "Tully" (2018), directed by Jason Reitman and written by Diablo Cody, Theron played an overwhelmed mother of three. The film was acclaimed by critics, who concluded it "delves into the modern parenthood experience with an admirably deft blend of humor and raw honesty, brought to life by an outstanding performance by Charlize Theron". She also played the president of a pharmaceutical company in the little-seen crime film "Gringo" and produced the biographical war drama film "A Private War", both released in 2018. In 2019, Theron starred in and produced the romantic comedy "Long Shot", directed by Jonathan Levine, portraying a U.S. Secretary of State who reconnects with a journalist (Seth Rogen) she used to babysit. The film had its world premiere at South by Southwest in March 2019 and was released on 3 May 2019, to positive reviews from film critics. Theron next starred as Megyn Kelly in the drama "Bombshell", which she also co-produced. Directed by Jay Roach, the film revolves around the sexual harassment allegations made against Fox News CEO Roger Ailes by former female employees. 
For her work in the film, Theron was nominated for an Academy Award for Best Actress, a Golden Globe Award for Best Actress in a Motion Picture – Drama, a Critics' Choice Movie Award for Best Actress, a Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Leading Role, and a BAFTA Award for Best Actress in a Leading Role. That year, "Forbes" ranked her as the ninth highest-paid actress in the world, with an annual income of $23 million. She will reprise her role as Cipher in "", originally set for release on 22 May 2020 before its delay to 2 April 2021 due to the COVID-19 pandemic. Theron will next star in and produce "The Old Guard", directed by Gina Prince-Bythewood, opposite KiKi Layne, for Netflix; it is to be released on 10 July 2020. The Charlize Theron Africa Outreach Project (CTAOP) was created by Theron in 2007 in an effort to support African youth in the fight against HIV/AIDS. The project is committed to supporting community-engaged organizations that address the key drivers of the disease. Although the geographic scope of CTAOP is Sub-Saharan Africa, the primary concentration has mostly been Theron's home country of South Africa. By November 2017, CTAOP had raised more than $6.3 million to support African organizations working on the ground. In 2008, Theron was named a United Nations Messenger of Peace. In his citation, Ban Ki-moon said of Theron: "You have consistently dedicated yourself to improving the lives of women and children in South Africa, and to preventing and stopping violence against women and girls". She recorded a public service announcement in 2014 as part of the UN's Stop Rape Now program. In December 2009, CTAOP and TOMS Shoes partnered to create a limited edition unisex shoe. The shoe was made from vegan materials and inspired by the African baobab tree, the silhouette of which was embroidered on blue and orange canvas. 
Ten thousand pairs were given to destitute children, and a portion of the proceeds went to CTAOP. Theron is involved in women's rights organizations and has marched in pro-choice rallies. She is also a supporter of animal rights and an active member of PETA, and appeared in a PETA ad for its anti-fur campaign. Theron is a supporter of same-sex marriage and attended a march and rally in support of it in Fresno, California, on 30 May 2009. She publicly stated that she refused to get married until same-sex marriage became legal in the United States, saying: "I don't want to get married because right now the institution of marriage feels very one-sided, and I want to live in a country where we all have equal rights. I think it would be exactly the same if we were married, but for me to go through that kind of ceremony, because I have so many friends who are gays and lesbians who would so badly want to get married, that I wouldn't be able to sleep with myself". Theron further elaborated on her stance in a June 2011 interview on "Piers Morgan Tonight", stating: "I do have a problem with the fact that our government hasn't stepped up enough to make this federal, to make [gay marriage] legal. I think everybody has that right". In March 2014, CTAOP was among the charities that benefited from the annual Fame and Philanthropy fundraising event on the night of the 86th Academy Awards; Theron was an honoured guest along with Halle Berry and keynote speaker James Cameron. In 2015, Theron signed an open letter for which the ONE Campaign had been collecting signatures. Addressed to Angela Merkel and Nkosazana Dlamini-Zuma, the letter urged them to focus on women as they headed the G7 in Germany and the AU in South Africa respectively, bodies that would start to set priorities in development funding ahead of a major UN summit in September 2015 intended to establish new development goals for the coming generation. 
In August 2018, she visited South Africa with Trevor Noah and made a donation to the South African charity Life Choices. In 2018, she gave a speech about AIDS prevention at the 22nd International AIDS Conference in Amsterdam, organized by the International AIDS Society. Having signed a deal with John Galliano in 2004, Theron replaced Estonian model Tiiu Kuik as the spokeswoman in the J'Adore advertisements by Christian Dior. From October 2005 to December 2006, Theron earned US$3 million for the use of her image in a worldwide print media advertising campaign for Raymond Weil watches. In February 2006, she and her production company were sued by Weil for breach of contract; the lawsuit was settled on 4 November 2008. In 2018, Theron joined Brad Pitt, Daniel Wu and Adam Driver as brand ambassadors for Breitling, dubbed the Breitling Cinema Squad. In 2007, Theron became a naturalized citizen of the United States, while retaining her South African citizenship. She lives in Los Angeles, near the now-demolished bank where she met Crosby. Theron has adopted two children: Jackson in March 2012 and August in July 2015. She had been interested in adoption throughout her life, because of her childhood concern about overcrowded orphanages. In April 2019, Theron revealed that her seven-year-old child Jackson is a transgender girl, stating: "They were born who they are and exactly where in the world both of them get to find themselves as they grow up, and who they want to be, is not for me to decide". Her acting inspirations include Susan Sarandon and Sigourney Weaver. She has described her admiration for Tom Hanks as a "love affair" and watched many of his movies throughout her youth. Hollywood actors were never featured in magazines in South Africa, so she never knew how famous he was until she moved to the United States, which has been cited as a factor in her "down-to-earth" attitude to fame. After filming for "That Thing You Do!"
finished, Theron got Hanks' autograph on her script. She later presented him with his Cecil B. DeMille Award in 2020, at which point Hanks revealed that he had admired Theron's career since the day he met her. Theron revealed in 2018 that she went to therapy in her 30s because of unexplained anger, discovering that it stemmed from her frustration at growing up during South Africa's apartheid, which ended when she was 15. Theron was in a three-year relationship with singer Stephan Jenkins until October 2001. Some of Third Eye Blind's third album, "Out of the Vein", explores the emotions Jenkins experienced as a result of their breakup. Theron began a relationship with Irish actor Stuart Townsend after meeting him on the set of the 2002 film "Trapped". The couple lived together in Los Angeles and Ireland. Theron split from Townsend in January 2010. In December 2013, Theron began dating American actor Sean Penn. The two were rumoured to be engaged in December 2014, but Theron ended the relationship in June 2015. Theron often quips that she has sustained more injuries on the sets of non-action films; however, while filming "Æon Flux" in Berlin, she suffered a herniated disc in her neck, caused by a fall while filming a series of back handsprings, which required her to wear a neck brace for a month. She tore a thumb ligament during "The Old Guard" when her thumb caught in another actor's jacket during a fight scene; the injury required three operations and six months in a thumb brace. There were no major injuries during the filming of "Atomic Blonde", but she broke teeth from jaw clenching and had dental surgery to remove them: "I had the removal and I had to put a donor bone in there to heal until I came back, and then I had another surgery to put a metal screw in there."
Outside of action movies, she suffered a herniated disc in her lower back while filming "Tully", along with a depression-like state, which she theorised resulted from the processed food she had to eat for her character's post-natal body. In July 2009, she was diagnosed with a serious stomach virus, thought to have been contracted while overseas. While filming "The Road", Theron injured her vocal cords during the labour screaming scenes. On her first modelling job in Morocco, the camel she sat on smacked its head into her jaw, causing two dislocations. When promoting "Long Shot", she revealed that she laughed so hard at "Borat" that her neck locked for five days, and that on the set of "Long Shot" she "ended up in the ER" after knocking her head against a bench behind her while putting on knee pads. As of early 2020, Theron's extensive film work had earned her 100 award nominations and 39 wins.
Chess

Chess is a two-player strategy board game played on a checkered board with 64 squares arranged in an 8×8 grid. Played by millions of people worldwide, chess is believed to be derived from the Indian game "chaturanga" sometime before the 7th century. Chaturanga is also the likely ancestor of the East Asian strategy games "xiangqi" (Chinese chess), "janggi" (Korean chess), and shogi (Japanese chess). Chess reached Europe by the 9th century, due to the Umayyad conquest of Hispania. The pieces assumed their current properties in Spain in the late 15th century, and the modern rules were standardized in the 19th century. Play involves no hidden information. Each player begins with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each piece type moves differently, with the most powerful being the queen and the least powerful the pawn. The objective is to checkmate the opponent's king by placing it under an inescapable threat of capture. To this end, a player's pieces are used to attack and capture the opponent's pieces, while supporting one another. During the game, play typically involves exchanging pieces for the opponent's similar pieces, and finding and engineering opportunities to trade advantageously or to reach a better position. In addition to checkmate, a player wins the game if the opponent resigns, or, in a timed game, runs out of time. There are also several ways that a game can end in a draw. The first generally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886. Since 1948, the World Championship has been regulated by the Fédération Internationale des Échecs (FIDE), the game's international governing body. FIDE also awards lifetime master titles to skilled players, the highest of which is Grandmaster (GM). Many national chess organizations have a title system of their own.
FIDE also organizes the Women's World Championship, the World Junior Championship, the World Senior Championship, the Blitz and Rapid World Championships, and the Chess Olympiad, a popular competition among international teams. FIDE is a member of the International Olympic Committee, which can be considered recognition of chess as a sport. Several national sporting bodies (e.g. the Spanish "Consejo Superior de Deportes") also recognize chess as a sport. Chess was included in the 2006 and 2010 Asian Games. There is also a Correspondence Chess World Championship and a World Computer Chess Championship. Online chess has opened amateur and professional competition to a wide and varied group of players. Since the second half of the 20th century, chess engines have been programmed to play with increasing success, to the point where the strongest programs play at a higher level than the best human players. Since the 1990s, computer analysis has contributed significantly to chess theory, particularly in the endgame. The IBM computer Deep Blue was the first machine to overcome a reigning World Chess Champion in a match when it defeated Garry Kasparov in 1997. The rise of strong chess engines runnable on hand-held devices has led to increasing concern about cheating during tournaments. There are many variants of chess that utilize different rules, pieces, or boards. One of these, Fischer Random Chess, has gained widespread popularity and official FIDE recognition. The rules of chess are published by FIDE ("Fédération Internationale des Échecs"), chess's international governing body, in its "Handbook". Rules published by national governing bodies, or by unaffiliated chess organizations, commercial publishers, etc., may differ. FIDE's rules were most recently revised in 2018. By convention, chess game pieces are divided into white and black sets. Each set consists of 16 pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. 
The pieces are set out as shown in the diagram and photo. The players of the sets are referred to as "White" and "Black", respectively. The game is played on a square board of eight rows (called "ranks", denoted "1" to "8" from bottom to top according to White's perspective) and eight columns (called "files", denoted "a" to "h" from left to right according to White's perspective). The 64 squares alternate in color and are referred to as "light" and "dark" squares. The chessboard is placed with a light square at the right-hand corner nearest to each player. Thus, each queen starts on a square of its own color (the white queen on a light square; the black queen on a dark square). In competitive games, the colors are allocated by the organizers; in informal games, the colors are usually decided randomly, for example by coin toss, or by one player's concealing a white pawn and a black pawn in either hand and having the opponent choose. White moves first, after which players alternate turns, moving one piece per turn (except for castling, when two pieces are moved). A piece is moved to either an unoccupied square or one occupied by an opponent's piece, which is captured and removed from play. With the sole exception of "en passant", all pieces capture by moving to the square that the opponent's piece occupies. Moving is compulsory; it is illegal to skip a turn, even when having to move is detrimental. A player may not make any move that would put or leave the player's own king in check. If the player to move has no legal move, the game is over; the result is either checkmate (a loss for the player with no legal move) if the king is in check, or stalemate (a draw) if the king is not. Each piece has its own way of moving. In the diagrams, the dots mark the squares to which the piece can move if there are no intervening pieces of either color (except the knight, which leaps over any intervening pieces). Once in every game, each king can make a special move, known as "castling".
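The board geometry just described (files "a" to "h", ranks "1" to "8", colors alternating so that a light square sits in each player's right-hand corner) can be expressed in a few lines. This is an illustrative sketch, not part of any chess library; the function name is invented for the example:

```python
def square_color(square: str) -> str:
    """Return "light" or "dark" for a square named in algebraic form, e.g. "d4".

    Files a-h map to indices 0-7 and ranks 1-8 to 0-7.  A square is
    light exactly when the two indices have different parity: a1 (the
    corner at White's left hand) is dark, h1 (White's right hand) is light.
    """
    file_idx = ord(square[0]) - ord("a")
    rank_idx = int(square[1]) - 1
    return "light" if (file_idx + rank_idx) % 2 == 1 else "dark"
```

Consistent with the rule above, `square_color("d1")` is `"light"` for the white queen's starting square and `square_color("d8")` is `"dark"` for the black queen's.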
Castling consists of moving the king two squares along the first rank toward a rook on the player's first rank, and then placing the rook on the last square that the king just crossed. Castling is permissible if the following conditions are met: When a pawn makes a two-step advance from its starting position and there is an opponent's pawn on a square next to the destination square on an adjacent file, then the opponent's pawn can capture it "en passant" ("in passing"), moving to the square the pawn passed over. This can be done only on the very next turn; otherwise the right to do so is forfeited. For example, in the animated diagram, the black pawn advances two steps from g7 to g5, and the white pawn on f5 can take it "en passant" on g6 (but only on White's next move). When a pawn advances to the eighth rank, as a part of the move it is "promoted" and must be exchanged for the player's choice of queen, rook, bishop, or knight of the same color. Usually, the pawn is chosen to be promoted to a queen, but in some cases another piece is chosen; this is called underpromotion. In the animated diagram, the pawn on c7 can be advanced to the eighth rank and be promoted. There is no restriction on the piece promoted to, so it is possible to have more pieces of the same type than at the start of the game (e.g., two or more queens). When a king is under immediate attack by one or two of the opponent's pieces, it is said to be "in check". A move in response to a check is legal only if it results in a position where the king is no longer in check. This can involve capturing the checking piece; interposing a piece between the checking piece and the king (which is possible only if the attacking piece is a queen, rook, or bishop and there is a square between it and the king); or moving the king to a square where it is not under attack. Castling is not a permissible response to a check.
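The "en passant" example above (Black advances g7–g5; the white pawn on f5 may capture on g6) can be checked mechanically. The sketch below is illustrative only: it tests just the geometry of the rule and ignores board state such as whose turn it is and the one-move expiry of the right:

```python
def en_passant_square(double_step_from, double_step_to, capturer):
    """If the pawn on `capturer` may capture en passant a pawn that just
    advanced from `double_step_from` to `double_step_to`, return the
    square the capture lands on (the square the pawn passed over);
    otherwise return None.
    """
    f_from, r_from = double_step_from[0], int(double_step_from[1])
    f_to, r_to = double_step_to[0], int(double_step_to[1])
    f_cap, r_cap = capturer[0], int(capturer[1])
    if f_from != f_to or abs(r_to - r_from) != 2:
        return None  # not a two-square pawn advance
    if r_cap != r_to or abs(ord(f_cap) - ord(f_to)) != 1:
        return None  # capturing pawn is not beside the destination square
    return f_to + str((r_from + r_to) // 2)  # the passed-over square
```

For the diagram's position, `en_passant_square("g7", "g5", "f5")` yields `"g6"`; a single-step advance such as g7–g6 yields `None`, since no square is passed over.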
The object of the game is to checkmate the opponent; this occurs when the opponent's king is in check, and there is no legal way to remove it from attack. It is never legal for a player to make a move that puts or leaves the player's own king in check. In casual games it is common to announce "check" when putting the opponent's king in check, but this is not required by the rules of chess, and is not usually done in tournaments. Games can be won in the following ways: There are several ways games can end in a draw: In competition, chess games are played with a time control. If a player's time runs out before the game is completed, the game is automatically lost (provided the opponent has enough pieces left to deliver checkmate). The duration of a game ranges from long (or "classical") games, which can take up to seven hours (even longer if adjournments are permitted), to bullet chess (under 3 minutes per player for the entire game). Intermediate between these are rapid chess games, lasting between 20 minutes and two hours per game, a popular time control in amateur weekend tournaments. Time is controlled using a chess clock that has two displays, one for each player's remaining time. Analog chess clocks have been largely replaced by digital clocks, which allow for time controls with increments. Time controls are also enforced in correspondence chess competition. A typical time control is 50 days for every 10 moves. Chess is believed to have originated in northwest India, in the Gupta Empire ( 280–550), where its early form in the 6th century was known as "chaturaṅga" (), literally "four divisions" [of the military] – infantry, cavalry, elephants, and chariotry, represented by the pieces that would evolve into the modern pawn, knight, bishop, and rook, respectively. Thence it spread eastward and westward along the Silk Road. The earliest evidence of chess is found in the nearby Sasanian Persia around 600, where the game came to be known by the name "chatrang". 
Chatrang was taken up by the Muslim world after the Islamic conquest of Persia (633–44), where it was then named "shatranj", with the pieces largely retaining their Persian names. In Spanish "shatranj" was rendered as "ajedrez" ("al-shatranj"), in Portuguese as "xadrez", and in Greek as ζατρίκιον ("zatrikion", which comes directly from the Persian "chatrang"), but in the rest of Europe it was replaced by versions of the Persian "shāh" ("king"), which was familiar as an exclamation and became the English words "check" and "chess". The word "checkmate" is derived from the Persian "shāh māt" ("the king is helpless"). The oldest archaeological chess artifacts, ivory pieces, were excavated in ancient Afrasiab, today's Samarkand, in Uzbekistan, central Asia, and date to about 760, with some of them possibly older. The oldest known chess manual was in Arabic and dates to 840–850, written by al-Adli ar-Rumi (800–870), a renowned Arab chess player, and titled "Kitab ash-shatranj" (Book of the chess). The manuscript is lost, but it is referenced in later works. The eastern migration of chess, into China and Southeast Asia, has even less documentation than its migration west. The first reference to Chinese chess, called "xiàngqí", appears in a book entitled "Xuán guaì lù" ("Record of the Mysterious and Strange"), dating to about 800. Alternatively, some contend that chess arose from Chinese chess or one of its predecessors, although this has been contested. The game reached Western Europe and Russia by at least three routes, the earliest being in the 9th century. By the year 1000, it had spread throughout both Muslim Iberia and Latin Europe. A Latin poem "de scachis" dated to the late 10th century has been preserved in Einsiedeln Abbey. A famous 13th-century manuscript covering shatranj, backgammon, and dice is known as the "Libro de los juegos".
Around 1200, the rules of shatranj started to be modified in southern Europe, and around 1475, several major changes made the game essentially as it is known today. These modern rules for the basic moves had been adopted in Italy and Spain. Pawns gained the option of advancing two squares on their first move, while bishops and queens acquired their modern abilities. The queen replaced the earlier vizier chess piece towards the end of the 10th century and by the 15th century had become the most powerful piece; consequently modern chess was referred to as "Queen's Chess" or "Mad Queen Chess". Castling, derived from the "king's leap", usually in combination with a pawn or rook move to bring the king to safety, was introduced. These new rules quickly spread throughout western Europe. Writings about the theory of how to play chess began to appear in the 15th century. The "Repetición de Amores y Arte de Ajedrez" ("Repetition of Love and the Art of Playing Chess") by Spanish churchman Luis Ramirez de Lucena was published in Salamanca in 1497. Lucena and later masters like Portuguese Pedro Damiano, Italians Giovanni Leonardo Di Bona, Giulio Cesare Polerio and Gioachino Greco, and Spanish bishop Ruy López de Segura developed elements of openings and started to analyze simple endgames. In the 18th century, the center of European chess life moved from the Southern European countries to France. The two most important French masters were François-André Danican Philidor, a musician by profession, who discovered the importance of pawns for chess strategy, and later Louis-Charles Mahé de La Bourdonnais, who won a famous series of matches with the Irish master Alexander McDonnell in 1834. Centers of chess activity in this period were coffee houses in major European cities like "Café de la Régence" in Paris and "Simpson's Divan" in London. The rules concerning stalemate were finalized in the early 19th century.
Also in the 19th century, the convention that White moves first was established (formerly either White or Black could move first). Finally the rules around castling were standardized – variations in the castling rules had persisted in Italy until the late 19th century. The resulting standard game is sometimes referred to as "Western chess" or "international chess", particularly in Asia where other games of the chess family such as xiangqi are prevalent. Since the 19th century, the only rule changes have been technical in nature, for example establishing the correct procedure for claiming a draw by repetition. As the 19th century progressed, chess organization developed quickly. Many chess clubs, chess books, and chess journals appeared. There were correspondence matches between cities; for example, the London Chess Club played against the Edinburgh Chess Club in 1824. Chess problems became a regular part of 19th-century newspapers; Bernhard Horwitz, Josef Kling, and Samuel Loyd composed some of the most influential problems. In 1843, von der Lasa published his and Bilguer's "Handbuch des Schachspiels" ("Handbook of Chess"), the first comprehensive manual of chess theory. The first modern chess tournament was organized by Howard Staunton, a leading English chess player, and was held in London in 1851. It was won by the German Adolf Anderssen, who was hailed as the leading chess master. His brilliant, energetic attacking style was typical for the time. Sparkling games like Anderssen's Immortal Game and Evergreen Game or Morphy's "Opera Game" were regarded as the highest possible summit of the chess art. The romantic era was characterized by opening gambits (sacrificing pawns or even pieces), daring attacks, and brazen sacrifices. Many elaborate and beautiful but unsound move sequences called "combinations" were played by the masters of the time. The game was played more for art than theory. 
A profound belief that chess merit resided in the players' genius rather than inherent in the position on the board pervaded chess practice. Deeper insight into the nature of chess came with the American Paul Morphy, an extraordinary chess prodigy. Morphy won against all important competitors (except Staunton, who refused to play), including Anderssen, during his short chess career between 1857 and 1863. Morphy's success stemmed from a combination of brilliant attacks and sound strategy; he intuitively knew how to prepare attacks. Prague-born Wilhelm Steinitz, beginning in 1873, described how to avoid weaknesses in one's own position and how to create and exploit such weaknesses in the opponent's position. The scientific approach and positional understanding of Steinitz revolutionized the game. Steinitz was the first to break a position down into its components. Before Steinitz, players brought their queen out early, did not completely develop their other pieces, and mounted a quick attack on the opposing king, which either succeeded or failed. The level of defense was poor and players did not form any deep plan. In addition to his theoretical achievements, Steinitz founded an important tradition: his triumph over the leading German master Johannes Zukertort in 1886 is regarded as the first official World Chess Championship. Steinitz lost his crown in 1894 to a much younger player, the German mathematician Emanuel Lasker, who maintained this title for 27 years, the longest tenure of any world champion. After the end of the 19th century, the number of master tournaments and matches held annually quickly grew. The first Olympiad was held in Paris in 1924, and FIDE was founded initially for the purpose of organizing that event. In 1927, the Women's World Chess Championship was established; the first to hold the title was Czech-English master Vera Menchik. A prodigy from Cuba, José Raúl Capablanca, known for his skill in endgames, won the World Championship from Lasker in 1921.
Capablanca was undefeated in tournament play for eight years, from 1916 to 1924. His successor (1927) was the Russian-French Alexander Alekhine, a strong attacking player who died as the world champion in 1946. Alekhine briefly lost the title to Dutch player Max Euwe in 1935 and regained it two years later. Between the world wars, chess was revolutionized by the new theoretical school of so-called hypermodernists like Aron Nimzowitsch and Richard Réti. They advocated controlling the center of the board with distant pieces rather than with pawns, thus inviting opponents to occupy the center with pawns, which become objects of attack. After the death of Alekhine, a new World Champion was sought. FIDE, which has controlled the title since then (except for one interruption), ran a tournament of elite players. The winner of the 1948 tournament was Russian Mikhail Botvinnik. In 1950 FIDE established a system of titles, conferring the titles of Grandmaster and International Master on 27 players. Some sources state that in 1914 the title of chess Grandmaster was first formally conferred by Tsar Nicholas II of Russia to Lasker, Capablanca, Alekhine, Tarrasch, and Marshall, but this is a disputed claim. Botvinnik started an era of Soviet dominance in the chess world. Until the end of the Soviet Union, there was only one non-Soviet champion, American Bobby Fischer (champion 1972–1975). Botvinnik revolutionized opening theory. Previously Black strove for equality, to neutralize White's first-move advantage. As Black, Botvinnik strove for the initiative from the beginning. In the previous informal system of World Championships, the current champion decided which challenger he would play for the title and the challenger was forced to seek sponsors for the match. FIDE set up a new system of qualifying tournaments and matches. The world's strongest players were seeded into Interzonal tournaments, where they were joined by players who had qualified from Zonal tournaments.
The leading finishers in these Interzonals would go on to the "Candidates" stage, which was initially a tournament, and later a series of knockout matches. The winner of the Candidates would then play the reigning champion for the title. A champion defeated in a match had a right to play a rematch a year later. This system operated on a three-year cycle. Botvinnik participated in championship matches over a period of fifteen years. He won the world championship tournament in 1948 and retained the title in tied matches in 1951 and 1954. In 1957, he lost to Vasily Smyslov, but regained the title in a rematch in 1958. In 1960, he lost the title to the 23-year-old Latvian prodigy Mikhail Tal, an accomplished tactician and attacking player. Botvinnik again regained the title in a rematch in 1961. Following the 1961 event, FIDE abolished the automatic right of a deposed champion to a rematch, and the next champion, Armenian Tigran Petrosian, a player renowned for his defensive and positional skills, held the title for two cycles, 1963–1969. His successor, Boris Spassky from Russia (champion 1969–1972), won games in both positional and sharp tactical style. The next championship, the so-called Match of the Century, saw the first non-Soviet challenger since World War II, American Bobby Fischer, who defeated his Candidates opponents by unheard-of margins and clearly won the world championship match. In 1975, however, Fischer refused to defend his title against Soviet Anatoly Karpov when FIDE did not meet his demands, and Karpov obtained the title by default. Fischer modernized many aspects of chess, especially by extensively preparing openings. Karpov defended his title twice against Viktor Korchnoi and dominated the 1970s and early 1980s with a string of tournament successes. Karpov's reign finally ended in 1985 at the hands of Garry Kasparov, another Soviet player from Baku, Azerbaijan.
Kasparov and Karpov contested five world title matches between 1984 and 1990; Karpov never won his title back. In 1993, Garry Kasparov and Nigel Short broke with FIDE to organize their own match for the title and formed a competing Professional Chess Association (PCA). From then until 2006, there were two simultaneous World Champions and World Championships: the PCA or Classical champion extending the Steinitzian tradition in which the current champion plays a challenger in a series of many games, and the other following FIDE's new format of many players competing in a tournament to determine the champion. Kasparov lost his Classical title in 2000 to Vladimir Kramnik of Russia. The World Chess Championship 2006, in which Kramnik beat the FIDE World Champion Veselin Topalov, reunified the titles and made Kramnik the undisputed World Chess Champion. In September 2007, he lost the title to Viswanathan Anand of India, who won the championship tournament in Mexico City. Anand defended his title in matches in 2008, 2010 and 2012. In 2013, Magnus Carlsen beat Anand in the 2013 World Chess Championship. He defended his title the following year, again against Anand. Carlsen confirmed his title in 2016 against the Russian Sergey Karjakin and in 2018 against the American Fabiano Caruana, on both occasions by a rapid tiebreaker match after equality in 12 games of classical time control, and is the reigning world champion. In the Middle Ages and during the Renaissance, chess was a part of noble culture; it was used to teach war strategy and was dubbed the "King's Game". Gentlemen are "to be meanly seene in the play at Chestes", says the overview at the beginning of Baldassare Castiglione's "The Book of the Courtier" (1528, English 1561 by Sir Thomas Hoby), but chess should not be a gentleman's main passion. Castiglione explains it further: And what say you to the game at chestes? It is truely an honest kynde of enterteynmente and wittie, quoth Syr Friderick.
But me think it hath a fault, whiche is, that a man may be to couning at it, for who ever will be excellent in the playe of chestes, I beleave he must beestowe much tyme about it, and applie it with so much study, that a man may assoone learne some noble scyence, or compase any other matter of importaunce, and yet in the ende in beestowing all that laboure, he knoweth no more but a game. Therfore in this I beleave there happeneth a very rare thing, namely, that the meane is more commendable, then the excellency. Many of the elaborate chess sets used by the aristocracy have been lost, but others partially survive, such as the Lewis chessmen. Chess was often used as a basis of sermons on morality. An example is "Liber de moribus hominum et officiis nobilium sive super ludo scacchorum" ('Book of the customs of men and the duties of nobles or the Book of Chess'), written by the Italian Dominican monk Jacobus de Cessolis. This book was one of the most popular of the Middle Ages. The work was translated into many other languages (the first printed edition was published at Utrecht in 1473) and was the basis for William Caxton's "The Game and Playe of the Chesse" (1474), one of the first books printed in English.
Different chess pieces were used as metaphors for different classes of people, and human duties were derived from the rules of the game or from visual properties of the chess pieces: The knyght ought to be made alle armed upon an hors in suche wyse that he haue an helme on his heed and a spere in his ryght hande/ and coueryd wyth his sheld/ a swerde and a mace on his lyft syde/ Cladd wyth an hawberk and plates to fore his breste/ legge harnoys on his legges/ Spores on his heelis on his handes his gauntelettes/ his hors well broken and taught and apte to bataylle and couerid with his armes/ whan the knyghtes ben maad they ben bayned or bathed/ that is the signe that they shold lede a newe lyf and newe maners/ also they wake alle the nyght in prayers and orysons vnto god that he wylle gyue hem grace that they may gete that thynge that they may not gete by nature/ The kynge or prynce gyrdeth a boute them a swerde in signe/ that they shold abyde and kepe hym of whom they take theyr dispenses and dignyte. Known in the circles of clerics, students, and merchants, chess entered into the popular culture of Middle Ages. An example is the 209th song of Carmina Burana from the 13th century, which starts with the names of chess pieces, "Roch, pedites, regina..." During the Age of Enlightenment, chess was viewed as a means of self-improvement. Benjamin Franklin, in his article "The Morals of Chess" (1750), wrote: The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it. By playing at Chess then, we may learn: I. 
Foresight, which looks a little into futurity, and considers the consequences that may attend an action [...] II. Circumspection, which surveys the whole Chess-board, or scene of action: – the relation of the several Pieces, and their situations [...] III. Caution, not to make our moves too hastily [...] Chess was occasionally criticized in the 19th century as a waste of time. Chess is taught to children in schools around the world today. Many schools host chess clubs, and there are many scholastic tournaments specifically for children. Tournaments are held regularly in many countries, hosted by organizations such as the United States Chess Federation and the National Scholastic Chess Foundation. Chess is often depicted in the arts; significant works where chess plays a key role range from Thomas Middleton's "A Game at Chess" to "Through the Looking-Glass" by Lewis Carroll, to Vladimir Nabokov's "The Defense", to "The Royal Game" by Stefan Zweig. Chess is featured in films like Ingmar Bergman's "The Seventh Seal" and Satyajit Ray's "The Chess Players". Chess is also present in contemporary popular culture. For example, the characters in "Star Trek" play a futuristic version of the game called "Tri-Dimensional Chess". "Wizard's Chess" is featured in J.K. Rowling's "Harry Potter" novels. The hero of "Searching for Bobby Fischer" struggles against adopting the aggressive and misanthropic views of a world chess champion. Chess is used as the core theme in the musical "Chess" by Tim Rice, Björn Ulvaeus, and Benny Andersson. The thriller film "Knight Moves" is about a chess grandmaster who is accused of being a serial killer. "Pawn Sacrifice", starring Tobey Maguire as Bobby Fischer and Liev Schreiber as Boris Spassky, depicts the drama surrounding the 1972 World Chess Championship in Iceland during the Cold War. The game of chess, at times, has been discouraged by various religious authorities, including Jewish, Christian and Muslim.
Jewish scholars Maimonides and Kalonymus ben Kalonymus both condemned chess, though the former only condemned it when played for money, while the latter condemned it in all circumstances. In medieval times both the Catholic and Orthodox churches condemned chess. Though the 16th-century Russian Orthodox "Domostroy" condemned the game, chess nevertheless remained popular in Russia. In 1979, Iranian Ayatollah Ruhollah Khomeini ruled against chess, but later allowed it as long as it did not involve gambling. Iran now has an active confederation for playing chess and sends players to international events. Saudi Mufti Abdul-Aziz ash-Sheikh similarly ruled against chess, arguing that it constituted gambling. Iraqi Ayatollah Ali al-Sistani said chess was forbidden "even without placing a bet". Chess games and positions are recorded using a system of notation, most commonly algebraic chess notation. Abbreviated algebraic (or short algebraic) notation generally records moves in the format: The pieces are identified by their initials. In English, these are "K" (king), "Q" (queen), "R" (rook), "B" (bishop), and "N" (knight; "N" is used to avoid confusion with the king's initial). For example, Qg5 means "queen moves to the g-file, 5th rank" (that is, to the square g5). Chess literature published in other languages may use different initials for pieces, or figurine algebraic notation (FAN) may be used to avoid language issues. To resolve ambiguities, an additional letter or number is added to indicate the file or rank from which the piece moved (e.g. Ngf3 means "knight from the g-file moves to the square f3"; R1e2 means "rook on the first rank moves to e2"). The letter "P" for pawn is not used; so e4 means "pawn moves to the square e4". If the piece makes a capture, "x" is inserted before the destination square. Thus Bxf3 means "bishop captures on f3". When a pawn makes a capture, the file from which the pawn departed is used in place of a piece initial, and ranks may be omitted if unambiguous. 
For example, exd5 (pawn on the e-file captures the piece on d5) or exd (pawn on the e-file captures a piece somewhere on the d-file). Particularly in Germany, some publications use ":" rather than "x" to indicate capture, but this is now rare. Some publications omit the capture symbol altogether; so exd5 would be rendered simply as ed. If a pawn moves to its last rank, achieving promotion, the piece chosen is indicated after the move (for example, e1Q or e1=Q). Castling is indicated by the special notations 0-0 for kingside castling and 0-0-0 for queenside castling. An "en passant" capture is sometimes marked with the notation "e.p." A move that places the opponent's king in check usually has the notation "+" added (the notation "++" for a double check is considered obsolete). Checkmate can be indicated by "#". At the end of the game, "1–0" means White won, "0–1" means Black won, and "½–½" indicates a draw. Chess moves can be annotated with punctuation marks and other symbols. (For example: "!" indicates a good move; "!!" an excellent move; "?" a mistake; "??" a blunder; "!?" an interesting move that may not be best; or "?!" a dubious move not easily refuted.) For example, one variation of a simple trap known as the Scholar's mate (see animated diagram) can be recorded: The text-based Portable Game Notation (PGN), which is understood by chess software, is based on short form English language algebraic notation. Until about 1980, the majority of English language chess publications used a form of descriptive notation. In descriptive notation, files are named according to the piece which occupies the back rank at the start of the game, and each square has two different names depending on whether it is from White's or Black's point of view. For example, the square known as "e3" in algebraic notation is "K3" (King's 3rd) from White's point of view, and "K6" (King's 6th) from Black's point of view. 
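The abbreviated algebraic conventions described above can be sketched in code. The following is a minimal illustration only: the function name and its inputs are my own, and disambiguation between identical pieces is omitted.

```python
# A sketch of short algebraic notation (SAN) as described above.
# Illustrative only: disambiguation (Ngf3, R1e2) is not modeled.

PIECE_INITIALS = {"king": "K", "queen": "Q", "rook": "R", "bishop": "B", "knight": "N"}

def san(piece, dest, capture=False, pawn_file=None, promotion=None, check=False, mate=False):
    """Render one move in short algebraic notation."""
    if piece == "pawn":
        # Pawns have no initial; a capturing pawn is named by its departure file.
        prefix = pawn_file if capture else ""
    else:
        prefix = PIECE_INITIALS[piece]
    move = prefix + ("x" if capture else "") + dest
    if promotion:          # promotion piece is appended, e.g. e8=Q
        move += "=" + PIECE_INITIALS[promotion]
    if mate:
        move += "#"
    elif check:
        move += "+"
    return move

print(san("queen", "g5"))                               # Qg5
print(san("bishop", "f3", capture=True))                # Bxf3
print(san("pawn", "d5", capture=True, pawn_file="e"))   # exd5
print(san("pawn", "e8", promotion="queen"))             # e8=Q
```

Each output line matches an example given in the text: the piece initial (or pawn file on a capture), an "x" for captures, the destination square, and any promotion, check, or mate suffix.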
When recording captures, the captured piece is named rather than the square on which it is captured (except to resolve ambiguities). Thus, Scholar's mate is rendered in descriptive notation: A few players still prefer descriptive notation, but it is no longer recognized by FIDE. Another system is ICCF numeric notation, recognized by the International Correspondence Chess Federation, though its use is in decline. Squares are identified by numeric coordinates, for example a1 is "11" and h8 is "88". Moves are described by the "from" and "to" squares, e.g. the opening move 1.e4 is rendered as 1.5254. Captures are not indicated. Castling is described by the king's move only; e.g. 5171 for White castling kingside, 5838 for Black castling queenside. A game of chess can be loosely subdivided into three phases or stages of play: the "opening", followed by the "middlegame", and finally the "endgame". A chess opening is the group of initial moves of a game (the "opening moves"). Recognized sequences of opening moves are referred to as "openings" and have been given names such as the Ruy Lopez or Sicilian Defense. They are catalogued in reference works such as the "Encyclopaedia of Chess Openings". There are dozens of different openings, varying widely in character from quiet (for example, the Réti Opening) to very aggressive (the Latvian Gambit). In some opening lines, the exact sequence considered best for both sides has been worked out to more than 30 moves. Professional players spend years studying openings and continue doing so throughout their careers, as opening theory continues to evolve. The fundamental strategic aims of most openings are similar: Most players and theoreticians consider that White, by virtue of the first move, begins the game with a small advantage. This initially gives White the initiative. Black usually strives to neutralize White's advantage and achieve equality, or to develop in an unbalanced position. 
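The ICCF numeric scheme described above is a straightforward coordinate translation; a minimal sketch (the helper names are my own):

```python
# ICCF numeric notation as described above: files a–h become digits 1–8,
# and a move is simply the "from" square followed by the "to" square.

def iccf_square(square):
    """Convert an algebraic square like 'e4' to ICCF numeric form ('54')."""
    file_digit = str("abcdefgh".index(square[0]) + 1)  # a=1 ... h=8
    return file_digit + square[1]

def iccf_move(from_sq, to_sq):
    """An ICCF move concatenates the numeric 'from' and 'to' squares."""
    return iccf_square(from_sq) + iccf_square(to_sq)

print(iccf_move("e2", "e4"))  # 5254  (the opening move 1.e4)
print(iccf_move("e1", "g1"))  # 5171  (White castling kingside, king's move only)
```

The two printed moves reproduce the examples in the text: 1.e4 as 5254 and White's kingside castling as 5171.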
The middlegame is the part of the game which starts after the opening. There is no clear line between the opening and the middlegame, but typically the middlegame will start when most pieces have been developed. (Similarly, there is no clear transition from the middlegame to the endgame; see start of the endgame.) Because opening theory has ended, players have to form plans based on the features of the position, and at the same time take into account the tactical possibilities of the position. The middlegame is the phase in which most combinations occur. Combinations are a series of tactical moves executed to achieve some gain. Middlegame combinations are often connected with an attack against the opponent's king. Some typical patterns have their own names; for example, Boden's Mate or the Lasker–Bauer combination. Specific plans or strategic themes will often arise from particular groups of openings which result in a specific type of pawn structure. An example is the minority attack, which is the attack of queenside pawns against an opponent who has more pawns on the queenside. The study of openings is therefore connected to the preparation of plans that are typical of the resulting middlegames. Another important strategic question in the middlegame is whether and how to reduce material and transition into an endgame (i.e. simplification). Minor material advantages can generally be transformed into victory only in an endgame, and therefore the stronger side must choose an appropriate way to achieve an ending. Not every reduction of material is good for this purpose; for example, if one side keeps a light-squared bishop and the opponent has a dark-squared one, the transformation into a bishops and pawns ending is usually advantageous for the weaker side only, because an endgame with bishops on opposite colors is likely to be a draw, even with an advantage of a pawn, or sometimes even with a two-pawn advantage. 
The endgame (also "end game" or "ending") is the stage of the game when there are few pieces left on the board. There are three main strategic differences between earlier stages of the game and the endgame: Endgames can be classified according to the type of pieces remaining on the board. Basic checkmates are positions in which one side has only a king and the other side has one or two pieces and can checkmate the opposing king, with the pieces working together with their king. For example, king and pawn endgames involve only kings and pawns on one or both sides, and the task of the stronger side is to promote one of the pawns. Other more complicated endings are classified according to pieces on the board other than kings, such as "rook and pawn versus rook" endgames. Chess strategy consists of setting and achieving long-term positional advantages during the game – for example, where to place different pieces – while tactics concentrate on immediate maneuvers. These two aspects of the gameplay cannot be completely separated, because strategic goals are mostly achieved through tactics, while the tactical opportunities are based on the previous strategy of play. A game of chess is normally divided into three phases: the opening, typically the first 10 moves, when players move their pieces to useful positions for the coming battle; the middlegame; and finally the endgame, when most of the pieces are gone, kings typically take a more active part in the struggle, and pawn promotion is often decisive. In chess, tactics in general concentrate on short-term actions – so short-term that they can be calculated in advance by a human player or a computer. The possible depth of calculation depends on the player's ability. In positions with many possibilities on both sides, a deep calculation is more difficult and may not be practical, while in positions with a limited number of variations, strong players can calculate long sequences of moves. 
Theoreticians describe many elementary tactical methods and typical maneuvers, for example: pins, forks, skewers, batteries, discovered attacks (especially discovered checks), zwischenzugs, deflections, decoys, sacrifices, underminings, overloadings, and interferences. Simple one-move or two-move tactical actions – threats, exchanges of material, and double attacks – can be combined into more complicated sequences of tactical maneuvers that are often forced from the point of view of one or both players. A forced variation that involves a sacrifice and usually results in a tangible gain is called a "combination". Brilliant combinations – such as those in the Immortal Game – are considered beautiful and are admired by chess lovers. A common type of chess exercise, aimed at developing players' skills, is a position where a decisive combination is available and the challenge is to find it. Chess strategy is concerned with the evaluation of chess positions and with setting up goals and long-term plans for the future play. During the evaluation, players must take into account numerous factors such as the value of the pieces on the board, control of the center and centralization, the pawn structure, king safety, and the control of key squares or groups of squares (for example, diagonals, open files, and dark or light squares). The most basic step in evaluating a position is to count the total value of pieces of both sides. The point values used for this purpose are based on experience; usually pawns are considered worth one point, knights and bishops about three points each, rooks about five points (the value difference between a rook and a bishop or knight being known as the exchange), and queens about nine points. The king is more valuable than all of the other pieces combined, since its checkmate loses the game. But in practical terms, in the endgame the king as a fighting piece is generally more powerful than a bishop or knight but less powerful than a rook. 
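The conventional point count described above can be expressed as a toy material-counting function. This is illustrative only – the values are the rules of thumb from the text, not an engine's evaluation, and the function name and piece encoding are my own:

```python
# Material count using the conventional point values described above.
# The king is excluded: it cannot be traded, so it carries no exchange value.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(white_pieces, black_pieces):
    """Sum piece values for each side; positive favors White, negative Black."""
    score = lambda pieces: sum(PIECE_VALUES[p] for p in pieces if p != "K")
    return score(white_pieces) - score(black_pieces)

# White has traded a rook for a bishop ("losing the exchange"):
# White keeps two bishops, Black keeps two rooks, all else equal.
print(material_balance(list("KQRBBNNPPPPPPPP"), list("KQRRBNNPPPPPPPP")))  # -2
```

The printed value of -2 is exactly the "exchange" mentioned in the text: the two-point gap between a rook (5) and a minor piece (3).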
These basic values are then modified by other factors like position of the piece (e.g. advanced pawns are usually more valuable than those on their initial squares), coordination between pieces (e.g. a pair of bishops usually coordinate better than a bishop and a knight), or the type of position (e.g. knights are generally better in closed positions with many pawns while bishops are more powerful in open positions). Another important factor in the evaluation of chess positions is the "pawn structure" (sometimes known as the "pawn skeleton"): the configuration of pawns on the chessboard. Since pawns are the least mobile of the pieces, the pawn structure is relatively static and largely determines the strategic nature of the position. Weaknesses in the pawn structure, such as isolated, doubled, or backward pawns and holes, once created, are often permanent. Care must therefore be taken to avoid these weaknesses unless they are compensated by another valuable asset (for example, by the possibility of developing an attack). Contemporary chess is an organized sport with structured international and national leagues, tournaments, and congresses. Chess's international governing body is FIDE (Fédération Internationale des Échecs). Most countries have a national chess organization as well (such as the US Chess Federation and English Chess Federation) which in turn is a member of FIDE. FIDE is a member of the International Olympic Committee, but the game of chess has never been part of the Olympic Games; chess has its own Olympiad, held every two years as a team event. The current World Chess Champion is Magnus Carlsen of Norway. The reigning Women's World Champion is Ju Wenjun from China. Other competitions for individuals include the World Junior Chess Championship, the European Individual Chess Championship, and the National Chess Championships. Invitation-only tournaments regularly attract the world's strongest players. 
Examples include Spain's Linares event, Monte Carlo's Melody Amber tournament, the Dortmund Sparkassen meeting, Sofia's M-tel Masters, and Wijk aan Zee's Tata Steel tournament. Regular team chess events include the Chess Olympiad and the European Team Chess Championship. The World Chess Solving Championship and World Correspondence Chess Championships include both team and individual events. Besides these prestigious competitions, there are thousands of other chess tournaments, matches, and festivals held around the world every year catering to players of all levels. Chess is promoted as a "mind sport" by the Mind Sports Organisation, alongside other mental-skill games such as contract bridge, Go, and "Scrabble". The best players can be awarded specific lifetime titles by the world chess organization FIDE: All the titles are open to men and women. Separate women-only titles, such as Woman Grandmaster (WGM), are available. Beginning with Nona Gaprindashvili in 1978, a number of women have earned the GM title, and as of 2020, all of the top ten rated women hold the unrestricted GM title. There are 1725 active grandmasters and 3903 international masters in the world. The top three countries with the largest numbers of grandmasters are Russia, the United States, and Germany, with 251, 98, and 96, respectively. FIDE also awards titles for arbiters and trainers. International titles are awarded to composers and solvers of chess problems and to correspondence chess players (by the International Correspondence Chess Federation). National chess organizations may also award titles, usually to the advanced players still under the level needed for international titles; an example is the chess expert title used in the United States. In order to rank players, FIDE, ICCF, and national chess organizations use the Elo rating system developed by Arpad Elo. 
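In outline, the Elo system predicts an expected score from the rating difference between two players and then nudges each rating by the gap between actual and expected results. A minimal sketch follows; the K-factor of 20 is an illustrative assumption (real federations vary K by rating and experience):

```python
# A sketch of the Elo rating update. The 400-point scale constant is the
# standard one; K=20 is an assumed example value, not a FIDE rule.

def expected_score(rating_a, rating_b):
    """Expected score (between 0 and 1) of player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, opponent, score, k=20):
    """New rating after one game; score is 1 (win), 0.5 (draw), or 0 (loss)."""
    return rating + k * (score - expected_score(rating, opponent))

# An ordinary club player (1500) upsets a strong club player (2000):
print(round(expected_score(1500, 2000), 3))  # 0.053
print(round(update(1500, 2000, 1.0), 1))     # 1518.9
```

Because the expected score against the stronger opponent is so low, the win yields nearly the full K points; beating an equally rated opponent would yield only half as much.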
Elo is a statistical system based on the assumption that the chess performance of each player in his or her games is a random variable. Arpad Elo thought of a player's true skill as the average of that player's performance random variable, and showed how to estimate the average from the results of the player's games. The US Chess Federation implemented Elo's suggestions in 1960, and the system quickly gained recognition as being both fairer and more accurate than older systems; it was adopted by FIDE in 1970. A beginner or casual player typically has an Elo rating of less than 1000; an ordinary club player has a rating of about 1500, a strong club player about 2000, a grandmaster usually has a rating of over 2500, and an elite player has a rating of over 2700. The highest FIDE rating of all time, 2881, was achieved by Magnus Carlsen on the March 2014 FIDE rating list. Chess composition is the art of creating chess problems (also called chess compositions). The creator is known as a chess composer. There are many types of chess problems; the two most important are: Chess composition is a distinct branch of chess sport, and tournaments exist for both the composition and solving of chess problems. This is one of the most famous chess studies; it was published by Richard Réti on 4 December 1921. It seems impossible to catch the advanced black pawn, while the black king can easily stop the white pawn. The solution is a diagonal advance, which brings the king to "both" pawns simultaneously. (If 2...h3, then 3.Ke7 and the white king can support its own pawn.) The white king comes just in time to support its pawn, or to catch the black one: if 3...Kxc6, then 4.Kf4 and White will capture the pawn; otherwise both sides queen, resulting in a draw. Chess has an extensive literature. In 1913, the chess historian H.J.R. Murray estimated the total number of books, magazines, and chess columns in newspapers to be about 5,000. B.H. Wood estimated the number, as of 1949, to be about 20,000. 
David Hooper and Kenneth Whyld write that, "Since then there has been a steady increase year by year of the number of new chess publications. No one knows how many have been printed." There are two significant public chess libraries: the John G. White Chess and Checkers Collection at Cleveland Public Library, with over 32,000 chess books and over 6,000 bound volumes of chess periodicals; and the Chess & Draughts collection at the National Library of the Netherlands, with about 30,000 books. GM Lothar Schmid owned the world's largest private collection of chess books and memorabilia. David DeLucia's chess library contains 7,000 to 8,000 chess books, a similar number of autographs (letters, score sheets, manuscripts), and about 1,000 items of "ephemera". Dirk Jan ten Geuzendam opines that DeLucia's collection "is arguably the finest chess collection in the world". The game structure and nature of chess are related to several branches of mathematics. Many combinatorial and topological problems connected to chess have been known for hundreds of years. The number of legal positions in chess is estimated to be about 10^43, and has been proved to be fewer than 10^47, with a game-tree complexity of approximately 10^123. The game-tree complexity of chess was first calculated by Claude Shannon as 10^120, a number known as the Shannon number. An average position typically has thirty to forty possible moves, but there may be as few as zero (in the case of checkmate or stalemate) or (in a constructed position) as many as 218. Chess has inspired many combinatorial puzzles, such as the knight's tour and the eight queens puzzle. The idea of creating a chess-playing machine dates to the 18th century; around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Serious trials based on automata, such as El Ajedrecista, were too complex and limited to be useful. 
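The Shannon number cited above follows from a short back-of-envelope calculation: with roughly 30–40 legal moves per position over a typical game of 40 moves per side (80 plies), the game tree is on the order of 10^120. As arithmetic, with an assumed average branching factor of 31.6:

```python
# Reproducing Shannon's order-of-magnitude estimate from the figures in
# the text. The branching factor of 31.6 is an assumed average, chosen
# within the stated 30-40 range of moves per position.
from math import log10

branching = 31.6   # assumed average number of legal moves per position
plies = 80         # 40 moves by White plus 40 by Black

exponent = plies * log10(branching)
print(round(exponent))  # 120 -- the exponent of the Shannon number
```

Small changes in the assumed branching factor shift the exponent by only a few units, which is why 10^120 is quoted as an order of magnitude rather than a precise count.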
Since the advent of the digital computer in the 1950s, chess enthusiasts, computer engineers, and computer scientists have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. The groundbreaking paper on computer chess, "Programming a Computer for Playing Chess", was published in 1950 by Shannon. He wrote: The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require "thinking" for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of "thinking"; (4) the discrete structure of chess fits well into the digital nature of modern computers. The Association for Computing Machinery (ACM) held the first major chess tournament for computers, the North American Computer Chess Championship, in September 1970. CHESS 3.0, a chess program from Northwestern University, won the championship. Nowadays, chess programs compete in the World Computer Chess Championship, held annually since 1974. At first considered only a curiosity, the best chess-playing programs have become extremely strong. In 1997, a computer won a chess match using classical time controls against a reigning World Champion for the first time: IBM's Deep Blue beat Garry Kasparov 3½–2½ (it scored two wins, one loss, and three draws). However, the match was controversial, and computers would only win such a match again in 2006. In 2009, a mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13 running on the mobile phone HTC Touch HD won the Copa Mercosur tournament with nine wins and one draw. 
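Programs in this tradition rest on game-tree search with a static evaluation at the leaves, as Shannon's paper proposed. A deliberately tiny minimax sketch over a hand-built tree illustrates the core idea (modern engines add alpha–beta pruning, sophisticated evaluation, and vastly deeper search):

```python
# Minimax over a toy game tree. Leaves are static evaluations (higher is
# better for the maximizing side); interior nodes are lists of children.
# Illustrative only -- no chess rules are modeled here.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: three candidate moves, each with two opponent replies.
# The maximizer picks the branch whose worst-case reply is best.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, True))  # 3
```

The first branch wins even though another branch contains the single largest leaf (14): minimax assumes the opponent replies with the move that is worst for us, which is precisely the reasoning Shannon mechanized.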
The best chess programs are now able to consistently beat the strongest human players, to the extent that human–computer matches no longer attract interest from chess players or media. With huge databases of past games and high analytical ability, computers can help players to learn chess and prepare for matches. Internet Chess Servers allow people to find and play opponents worldwide. The presence of computers and modern communication tools has raised concerns regarding cheating during games, most notably the "bathroom controversy" during the 2006 World Championship. In 1913, Ernst Zermelo used chess as a basis for his theory of game strategies, which is considered one of the predecessors of game theory. Zermelo's theorem states that it is possible to solve chess, i.e. to determine with certainty the outcome of a perfectly played game (either White can force a win, or Black can force a win, or both sides can force at least a draw). According to Claude Shannon, however, there are 10^43 legal positions in chess, so it will take an impossibly long time to compute a perfect strategy with any feasible technology. The 11-category, game-theoretical taxonomy of chess includes: two-player, no-chance, combinatorial, Markov state (the present state is all a player needs to move; although past states led up to that point, knowledge of the sequence of past moves is not required to make the next move, except to take into account "en passant" and castling, which "do" depend on the past moves), zero-sum, symmetric, perfect information, non-cooperative, discrete, extensive form (tree decisions, not payoff matrices), and sequential. Generalized chess (played on an "n"×"n" board, without the fifty-move rule) is EXPTIME-complete. Some applications of combinatorial game theory to chess endgames were found by Elkies (1996). There is an extensive scientific literature on chess psychology. 
Alfred Binet and others showed that knowledge and verbal, rather than visuospatial, ability lie at the core of expertise. In his doctoral thesis, Adriaan de Groot showed that chess masters can rapidly perceive the key features of a position. According to de Groot, this perception, made possible by years of practice and study, is more important than the sheer ability to anticipate moves. De Groot showed that chess masters can memorize positions shown for a few seconds almost perfectly. The ability to memorize does not alone account for chess-playing skill, since masters and novices, when faced with random arrangements of chess pieces, had equivalent recall (about six positions in each case). Rather, it is the ability to recognize patterns, which are then memorized, which distinguished the skilled players from the novices. When the positions of the pieces were taken from an actual game, the masters had almost total positional recall. More recent research has focused on chess as mental training; the respective roles of knowledge and look-ahead search; brain imaging studies of chess masters and novices; blindfold chess; the role of personality and intelligence in chess skill; gender differences; and computational models of chess expertise. The role of practice and talent in the development of chess and other domains of expertise has led to much recent research. Ericsson and colleagues have argued that deliberate practice is sufficient for reaching high levels of expertise in chess. Recent research indicates that factors other than practice are also important. For example, Fernand Gobet and colleagues have shown that stronger players started playing chess at a young age and that experts born in the Northern Hemisphere are more likely to have been born in late winter and early spring. Compared to the general population, chess players are more likely to be non-right-handed, though they found no correlation between handedness and skill. 
A relationship between chess skill and intelligence has long been discussed in the literature and popular culture. Academic studies of the relationship date back at least to 1927. Academic opinion has long been split on how strong the relationship is, with some studies finding no relationship and others finding a relatively strong one. A 2016 meta-analysis and review based on 19 studies and a total sample size of 1,779 found that various aspects of general intelligence correlate with chess skill, with average correlations ranging from 0.13 (visuospatial ability) to 0.35 (numerical ability). The review did not find strong evidence of publication bias biasing these estimates. Moderator analyses indicated that the relationship was stronger in unranked players (r = 0.32) vs. ranked players (r = 0.14), as well as stronger in children (r = 0.32) than adults (r = 0.11). There are more than two thousand published chess variants, most of them of relatively recent origin, including: Prime sources in English describing chess variants and their rules include David Pritchard's encyclopedias, the website "The Chess Variant Pages" created by Hans Bodlaender with various contributors, and the magazine "Variant Chess" published from 1990 (George Jelliss) to 2010 (the British Chess Variants Society). In the context of chess variants, regular (i.e. FIDE) chess is commonly referred to as "Western chess", "international chess", "orthodox chess", "orthochess", and "classic chess".
https://en.wikipedia.org/wiki?curid=5134
Charlie Chaplin Sir Charles Spencer Chaplin (16 April 1889 – 25 December 1977) was an English comic actor, filmmaker, and composer who rose to fame in the era of silent film. He became a worldwide icon through his screen persona, "The Tramp", and is considered one of the most important figures in the history of the film industry. His career spanned more than 75 years, from childhood in the Victorian era until a year before his death in 1977, and encompassed both adulation and controversy. Chaplin's childhood in London was one of poverty and hardship, as his father was absent and his mother struggled financially, and he was sent to a workhouse twice before the age of nine. When he was 14, his mother was committed to a mental asylum. Chaplin began performing at an early age, touring music halls and later working as a stage actor and comedian. At 19, he was signed to the prestigious Fred Karno company, which took him to America. He was scouted for the film industry and began appearing in films for Keystone Studios in 1914. He soon developed the Tramp persona and formed a large fan base. He directed his own films and continued to hone his craft as he moved to the Essanay, Mutual, and First National corporations. By 1918, he was one of the best-known figures in the world. In 1919, Chaplin co-founded the distribution company United Artists, which gave him complete control over his films. His first feature-length film was "The Kid" (1921), followed by "A Woman of Paris" (1923), "The Gold Rush" (1925), and "The Circus" (1928). He initially refused to move to sound films in the 1930s, instead producing "City Lights" (1931) and "Modern Times" (1936) without dialogue. He became increasingly political, and his first sound film was "The Great Dictator" (1940), which satirised Adolf Hitler. The 1940s were a decade marked by controversy for Chaplin, and his popularity declined rapidly. 
He was accused of communist sympathies, and some members of the press and public found his involvement in a paternity suit, and marriages to much younger women, scandalous. An FBI investigation was opened, and Chaplin was forced to leave the United States and settle in Switzerland. He abandoned the Tramp in his later films, which include "Monsieur Verdoux" (1947), "Limelight" (1952), "A King in New York" (1957), and "A Countess from Hong Kong" (1967). Chaplin wrote, directed, produced, edited, starred in, and composed the music for most of his films. He was a perfectionist, and his financial independence enabled him to spend years on the development and production of a picture. His films are characterised by slapstick combined with pathos, typified in the Tramp's struggles against adversity. Many contain social and political themes, as well as autobiographical elements. He received an Honorary Academy Award for "the incalculable effect he has had in making motion pictures the art form of this century" in 1972, as part of a renewed appreciation for his work. He continues to be held in high regard, with "The Gold Rush", "City Lights", "Modern Times", and "The Great Dictator" often ranked on lists of the greatest films of all time. Charles Spencer Chaplin was born on 16 April 1889 to Hannah Chaplin (born Hannah Harriet Pedlingham Hill) and Charles Chaplin Sr. There is no official record of his birth, although Chaplin believed he was born at East Street, Walworth, in South London. His parents had married four years previously, at which time Charles Sr. became the legal guardian of Hannah's illegitimate son, Sydney John Hill. At the time of his birth, Chaplin's parents were both music hall entertainers. Hannah, the daughter of a shoemaker, had a brief and unsuccessful career under the stage name Lily Harley, while Charles Sr., a butcher's son, was a popular singer. Although they never divorced, Chaplin's parents were estranged by around 1891. 
The following year, Hannah gave birth to a third son – George Wheeler Dryden – fathered by the music hall entertainer Leo Dryden. The child was taken by Dryden at six months old, and did not re-enter Chaplin's life for 30 years. Chaplin's childhood was fraught with poverty and hardship, making his eventual trajectory "the most dramatic of all the rags to riches stories ever told" according to his authorised biographer David Robinson. Chaplin's early years were spent with his mother and brother Sydney in the London district of Kennington; Hannah had no means of income, other than occasional nursing and dressmaking, and Chaplin Sr. provided no financial support. As the situation deteriorated, Chaplin was sent to Lambeth Workhouse when he was seven years old. The council housed him at the Central London District School for paupers, which Chaplin remembered as "a forlorn existence". He was briefly reunited with his mother 18 months later, before Hannah was forced to readmit her family to the workhouse in July 1898. The boys were promptly sent to Norwood Schools, another institution for destitute children. In September 1898, Hannah was committed to Cane Hill mental asylum – she had developed a psychosis seemingly brought on by an infection of syphilis and malnutrition. For the two months she was there, Chaplin and his brother Sydney were sent to live with their father, whom the young boys scarcely knew. Charles Sr. was by then a severe alcoholic, and life there was bad enough to provoke a visit from the National Society for the Prevention of Cruelty to Children. Chaplin's father died two years later, at 38 years old, from cirrhosis of the liver. Hannah entered a period of remission but, in May 1903, became ill again. Chaplin, then 14, had the task of taking his mother to the infirmary, from where she was sent back to Cane Hill. 
He lived alone for several days, searching for food and occasionally sleeping rough, until Sydney – who had enrolled in the Navy two years earlier – returned. Hannah was released from the asylum eight months later, but in March 1905, her illness returned, this time permanently. "There was nothing we could do but accept poor mother's fate", Chaplin later wrote, and she remained in care until her death in 1928. Between his time in the poor schools and his mother succumbing to mental illness, Chaplin began to perform on stage. He later recalled making his first amateur appearance at the age of five years, when he took over from Hannah one night in Aldershot. This was an isolated occurrence, but by the time he was nine Chaplin had, with his mother's encouragement, grown interested in performing. He later wrote: "[she] imbued me with the feeling that I had some sort of talent". Through his father's connections, Chaplin became a member of the Eight Lancashire Lads clog-dancing troupe, with whom he toured English music halls throughout 1899 and 1900. Chaplin worked hard, and the act was popular with audiences, but he was not satisfied with dancing and wished to form a comedy act. In the years Chaplin was touring with the Eight Lancashire Lads, his mother ensured that he still attended school but, by age 13, he had abandoned education. He supported himself with a range of jobs, while nursing his ambition to become an actor. At 14, shortly after his mother's relapse, he registered with a theatrical agency in London's West End. The manager sensed potential in Chaplin, who was promptly given his first role as a newsboy in Harry Arthur Saintsbury's "Jim, a Romance of Cockayne". It opened in July 1903, but the show was unsuccessful and closed after two weeks. Chaplin's comic performance, however, was singled out for praise in many of the reviews. 
Saintsbury secured a role for Chaplin in Charles Frohman's production of "Sherlock Holmes", where he played Billy the pageboy in three nationwide tours. His performance was so well received that he was called to London to play the role alongside William Gillette, the original Holmes. "It was like tidings from heaven", Chaplin recalled. At 16 years old, Chaplin starred in the play's West End production at the Duke of York's Theatre from October to December 1905. He completed one final tour of "Sherlock Holmes" in early 1906, before leaving the play after more than two-and-a-half years. Chaplin soon found work with a new company, and went on tour with his brother – who was also pursuing an acting career – in a comedy sketch called "Repairs". In May 1906, Chaplin joined the juvenile act Casey's Circus, where he developed popular burlesque pieces and was soon the star of the show. By the time the act finished touring in July 1907, the 18-year-old had become an accomplished comedic performer. He struggled to find more work, however, and a brief attempt at a solo act was a failure. Meanwhile, Sydney Chaplin had joined Fred Karno's prestigious comedy company in 1906 and, by 1908, he was one of their key performers. In February, he managed to secure a two-week trial for his younger brother. Karno was initially wary, and considered Chaplin a "pale, puny, sullen-looking youngster" who "looked much too shy to do any good in the theatre." However, the teenager made an impact on his first night at the London Coliseum and he was quickly signed to a contract. Chaplin began by playing a series of minor parts, eventually progressing to starring roles in 1909. In April 1910, he was given the lead in a new sketch, "Jimmy the Fearless". It was a big success, and Chaplin received considerable press attention. Karno selected his new star to join the section of the company that toured North America's vaudeville circuit, a troupe that also included Stan Laurel.
The young comedian headed the show and impressed reviewers, being described as "one of the best pantomime artists ever seen here". His most successful role was a drunk called the "Inebriate Swell", which drew him significant recognition. The tour lasted 21 months, and the troupe returned to England in June 1912. Chaplin recalled that he "had a disquieting feeling of sinking back into a depressing commonplaceness" and was, therefore, delighted when a new tour began in October. Six months into the second American tour, Chaplin was invited to join the New York Motion Picture Company. A representative who had seen his performances thought he could replace Fred Mace, a star of their Keystone Studios who intended to leave. Chaplin thought the Keystone comedies "a crude mélange of rough and tumble", but liked the idea of working in films and rationalised: "Besides, it would mean a new life." He met with the company and signed a $150-per-week contract in September 1913. Chaplin arrived in Los Angeles in early December, and began working for the Keystone studio on 5 January 1914. Chaplin's boss was Mack Sennett, who initially expressed concern that the 24-year-old looked too young. He was not used in a picture until late January, during which time Chaplin attempted to learn the processes of filmmaking. The one-reeler "Making a Living" marked his film acting debut and was released on 2 February 1914. Chaplin strongly disliked the picture, but one review picked him out as "a comedian of the first water". For his second appearance in front of the camera, Chaplin selected the costume with which he became identified, and he described the process of creating it in his autobiography. The film was "Mabel's Strange Predicament", but "the Tramp" character, as it became known, debuted to audiences in "Kid Auto Races at Venice" – shot later than "Mabel's Strange Predicament" but released two days earlier on 7 February 1914.
Chaplin adopted the character as his screen persona and attempted to make suggestions for the films he appeared in, but these ideas were dismissed by his directors. During the filming of his eleventh picture, "Mabel at the Wheel", he clashed with director Mabel Normand and was almost released from his contract. Sennett kept him on, however, when he received orders from exhibitors for more Chaplin films. Sennett also allowed Chaplin to direct his next film himself after Chaplin promised to pay $1,500 if the film was unsuccessful. "Caught in the Rain", issued 4 May 1914, was Chaplin's directorial debut and was highly successful. Thereafter he directed almost every short film in which he appeared for Keystone, at the rate of approximately one per week, a period which he later remembered as the most exciting time of his career. Chaplin's films introduced a slower form of comedy than the typical Keystone farce, and he developed a large fan base. In November 1914, he had a supporting role in the first feature-length comedy film, "Tillie's Punctured Romance", directed by Sennett and starring Marie Dressler, which was a commercial success and increased his popularity. When Chaplin's contract came up for renewal at the end of the year, he asked for $1,000 a week – an amount Sennett refused as too large. The Essanay Film Manufacturing Company of Chicago sent Chaplin an offer of $1,250 a week with a signing bonus of $10,000. He joined the studio in late December 1914, where he began forming a stock company of regular players, actors he worked with again and again, including Leo White, Bud Jamison, Paddy McGuire and Billy Armstrong. He soon recruited a leading lady – Edna Purviance, whom Chaplin met in a café and hired on account of her beauty. She went on to appear in 35 films with Chaplin over eight years; the pair also formed a romantic relationship that lasted into 1917.
Chaplin asserted a high level of control over his pictures and started to put more time and care into each film. There was a month-long interval between the release of his second production, "A Night Out", and his third, "The Champion". The final seven of Chaplin's 14 Essanay films were all produced at this slower pace. Chaplin also began to alter his screen persona, which had attracted some criticism at Keystone for its "mean, crude, and brutish" nature. The character became more gentle and romantic; "The Tramp" (April 1915) was considered a particular turning point in his development. The use of pathos was developed further with "The Bank", in which Chaplin created a sad ending. Robinson notes that this was an innovation in comedy films, and marked the time when serious critics began to appreciate Chaplin's work. At Essanay, writes film scholar Simon Louvish, Chaplin "found the themes and the settings that would define the Tramp's world." During 1915, Chaplin became a cultural phenomenon. Shops were stocked with Chaplin merchandise, he was featured in cartoons and comic strips, and several songs were written about him. In July, a journalist for "Motion Picture Magazine" wrote that "Chaplinitis" had spread across America. As his fame grew worldwide, he became the film industry's first international star. When the Essanay contract ended in December 1915, Chaplin – fully aware of his popularity – requested a $150,000 signing bonus from his next studio. He received several offers, including Universal, Fox, and Vitagraph, the best of which came from the Mutual Film Corporation at $10,000 a week. A contract was negotiated with Mutual that amounted to $670,000 a year, which Robinson says made Chaplin – at 26 years old – one of the highest paid people in the world. The high salary shocked the public and was widely reported in the press. John R. Freuler, the studio president, explained: "We can afford to pay Mr.
Chaplin this large sum annually because the public wants Chaplin and will pay for him." Mutual gave Chaplin his own Los Angeles studio to work in, which opened in March 1916. He added two key members to his stock company, Albert Austin and Eric Campbell, and produced a series of elaborate two-reelers: "The Floorwalker", "The Fireman", "The Vagabond", "One A.M.", and "The Count". For "The Pawnshop", he recruited the actor Henry Bergman, who was to work with Chaplin for 30 years. "Behind the Screen" and "The Rink" completed Chaplin's releases for 1916. The Mutual contract stipulated that he release a two-reel film every four weeks, which he had managed to achieve. With the new year, however, Chaplin began to demand more time. He made only four more films for Mutual over the first ten months of 1917: "Easy Street", "The Cure", "The Immigrant", and "The Adventurer". With their careful construction, these films are considered by Chaplin scholars to be among his finest work. Later in life, Chaplin referred to his Mutual years as the happiest period of his career. However, Chaplin also felt that the films became increasingly formulaic over the period of the contract, and he grew dissatisfied with the working conditions that encouraged this. Chaplin was attacked in the British media for not fighting in the First World War. He defended himself, claiming that he would fight for Britain if called and had registered for the American draft, but he was not summoned by either country. Despite this criticism Chaplin was a favourite with the troops, and his popularity continued to grow worldwide. "Harper's Weekly" reported that the name of Charlie Chaplin was "a part of the common language of almost every country", and that the Tramp image was "universally familiar". In 1917, professional Chaplin imitators were so widespread that he took legal action, and it was reported that nine out of ten men who attended costume parties did so dressed as the Tramp.
The same year, a study by the Boston Society for Psychical Research concluded that Chaplin was "an American obsession". The actress Minnie Maddern Fiske wrote that "a constantly increasing body of cultured, artistic people are beginning to regard the young English buffoon, Charles Chaplin, as an extraordinary artist, as well as a comic genius". In January 1918, Chaplin was visited by leading British singer and comedian Harry Lauder, and the two acted in a short film together. Mutual was patient with Chaplin's decreased rate of output, and the contract ended amicably. Concerned that contract scheduling stipulations had undermined the quality of his films, Chaplin made independence his main requirement in finding a new distributor; Sydney Chaplin, then his business manager, told the press, "Charlie [must] be allowed all the time he needs and all the money for producing [films] the way he wants ... It is quality, not quantity, we are after." In June 1917, Chaplin signed to complete eight films for First National Exhibitors' Circuit in return for $1 million. He chose to build his own studio, situated on five acres of land off Sunset Boulevard, with production facilities of the highest order. It was completed in January 1918, and Chaplin was given freedom over the making of his pictures. "A Dog's Life", released April 1918, was the first film under the new contract. In it, Chaplin demonstrated his increasing concern with story construction and his treatment of the Tramp as "a sort of Pierrot". The film was described by Louis Delluc as "cinema's first total work of art". Chaplin then embarked on the Third Liberty Bond campaign, touring the United States for one month to raise money for the Allies of the First World War. He also produced a short propaganda film at his own expense, donated to the government for fund-raising, called "The Bond".
Chaplin's next release was war-based, placing the Tramp in the trenches for "Shoulder Arms". Associates warned him against making a comedy about the war but, as he later recalled: "Dangerous or not, the idea excited me." He spent four months filming the 45-minute-long picture, which was released in October 1918 with great success. After the release of "Shoulder Arms", Chaplin requested more money from First National, which was refused. Frustrated with their lack of concern for quality, and worried about rumours of a possible merger between the company and Famous Players-Lasky, Chaplin joined forces with Douglas Fairbanks, Mary Pickford, and D. W. Griffith to form a new distribution company – United Artists, established in January 1919. The arrangement was revolutionary in the film industry, as it enabled the four partners – all creative artists – to personally fund their pictures and have complete control. Chaplin was eager to start with the new company and offered to buy out his contract with First National. They refused and insisted that he complete the final six films owed. Before the creation of United Artists, Chaplin married for the first time. The 16-year-old actress Mildred Harris had revealed that she was pregnant with his child, and in September 1918, he married her quietly in Los Angeles to avoid controversy. Soon after, the pregnancy was found to be false. Chaplin was unhappy with the union and, feeling that marriage stunted his creativity, struggled over the production of his film "Sunnyside". Harris was by then legitimately pregnant, and on 7 July 1919, gave birth to a son. Norman Spencer Chaplin was born malformed and died three days later. The marriage ended in April 1920, with Chaplin explaining in his autobiography that they were "irreconcilably mismated". Losing the child, plus his own childhood experiences, are thought to have influenced Chaplin's next film, which turned the Tramp into the caretaker of a young boy. 
For this new venture, Chaplin also wished to do more than comedy and, according to Louvish, "make his mark on a changed world." Filming on "The Kid" began in August 1919, with four-year-old Jackie Coogan his co-star. "The Kid" was in production for nine months until May 1920 and, at 68 minutes, it was Chaplin's longest picture to date. Dealing with issues of poverty and parent–child separation, "The Kid" was one of the earliest films to combine comedy and drama. It was released in January 1921 with instant success, and, by 1924, had been screened in over 50 countries. Chaplin spent five months on his next film, the two-reeler "The Idle Class". Following its September 1921 release, he chose to return to England for the first time in almost a decade. He then worked to fulfil his First National contract, releasing "Pay Day" in February 1922. "The Pilgrim" – his final short film – was delayed by distribution disagreements with the studio, and released a year later. Having fulfilled his First National contract, Chaplin was free to make his first picture as an independent producer. In November 1922, he began filming "A Woman of Paris", a romantic drama about ill-fated lovers. Chaplin intended it to be a star-making vehicle for Edna Purviance, and did not appear in the picture himself other than in a brief, uncredited cameo. He wished the film to have a realistic feel and directed his cast to give restrained performances. In real life, he explained, "men and women try to hide their emotions rather than seek to express them". "A Woman of Paris" premiered in September 1923 and was acclaimed for its innovative, subtle approach. The public, however, seemed to have little interest in a Chaplin film without Chaplin, and it was a box office disappointment. The filmmaker was hurt by this failure – he had long wanted to produce a dramatic film and was proud of the result – and soon withdrew "A Woman of Paris" from circulation. Chaplin returned to comedy for his next project. 
Setting his standards high, he told himself "This next film must be an epic! The Greatest!" Inspired by a photograph of the 1898 Klondike Gold Rush, and later the story of the Donner Party of 1846–1847, he made what Geoffrey Macnab calls "an epic comedy out of grim subject matter." In "The Gold Rush", the Tramp is a lonely prospector fighting adversity and looking for love. With Georgia Hale as his leading lady, Chaplin began filming the picture in February 1924. Its elaborate production, costing almost $1 million, included location shooting near Truckee in the Sierra Nevada mountains with 600 extras, extravagant sets, and special effects. The last scene was shot in May 1925 after 15 months of filming. Chaplin felt "The Gold Rush" was the best film he had made. It opened in August 1925 and became one of the highest-grossing films of the silent era with a U.S. box-office of $5 million. The comedy contains some of Chaplin's most famous sequences, such as the Tramp eating his shoe and the "Dance of the Rolls". Macnab has called it "the quintessential Chaplin film". Chaplin stated at its release, "This is the picture that I want to be remembered by". While making "The Gold Rush", Chaplin married for the second time. Mirroring the circumstances of his first union, Lita Grey was a teenage actress, originally set to star in the film, whose surprise announcement of pregnancy forced Chaplin into marriage. She was 16 and he was 35, meaning Chaplin could have been charged with statutory rape under California law. He therefore arranged a discreet marriage in Mexico on 25 November 1924. They originally met during her childhood and she had previously appeared in his works "The Kid" and "The Idle Class". Their first son, Charles Spencer Chaplin, Jr., was born on 5 May 1925, followed by Sydney Earl Chaplin on 30 March 1926. It was an unhappy marriage, and Chaplin spent long hours at the studio to avoid seeing his wife. In November 1926, Grey took the children and left the family home.
A bitter divorce followed, in which Grey's application – accusing Chaplin of infidelity, abuse, and of harbouring "perverted sexual desires" – was leaked to the press. Chaplin was reported to be in a state of nervous breakdown, as the story became headline news and groups formed across America calling for his films to be banned. Eager to end the case without further scandal, Chaplin's lawyers agreed to a cash settlement of $600,000 – the largest awarded by American courts at that time. His fan base was strong enough to survive the incident, and it was soon forgotten, but Chaplin was deeply affected by it. Before the divorce suit was filed, Chaplin had begun work on a new film, "The Circus". He built a story around the idea of walking a tightrope while besieged by monkeys, and turned the Tramp into the accidental star of a circus. Filming was suspended for 10 months while he dealt with the divorce scandal, and it was generally a trouble-ridden production. Finally completed in October 1927, "The Circus" was released in January 1928 to a positive reception. At the 1st Academy Awards, Chaplin was given a special trophy "For versatility and genius in acting, writing, directing and producing "The Circus"". Despite its success, he permanently associated the film with the stress of its production; Chaplin omitted "The Circus" from his autobiography, and struggled to work on it when he recorded the score in his later years. By the time "The Circus" was released, Hollywood had witnessed the introduction of sound films. Chaplin was cynical about this new medium and the technical shortcomings it presented, believing that "talkies" lacked the artistry of silent films. He was also hesitant to change the formula that had brought him such success, and feared that giving the Tramp a voice would limit his international appeal. He, therefore, rejected the new Hollywood craze and began work on a new silent film.
Chaplin was nonetheless anxious about this decision and remained so throughout the film's production. When filming began at the end of 1928, Chaplin had been working on the story for almost a year. "City Lights" followed the Tramp's love for a blind flower girl (played by Virginia Cherrill) and his efforts to raise money for her sight-saving operation. It was a challenging production that lasted 21 months, with Chaplin later confessing that he "had worked himself into a neurotic state of wanting perfection". One advantage Chaplin found in sound technology was the opportunity to record a musical score for the film, which he composed himself. Chaplin finished editing "City Lights" in December 1930, by which time silent films were an anachronism. A preview before an unsuspecting public audience was not a success, but a showing for the press produced positive reviews. One journalist wrote, "Nobody in the world but Charlie Chaplin could have done it. He is the only person that has that peculiar something called 'audience appeal' in sufficient quality to defy the popular penchant for movies that talk." Given its general release in January 1931, "City Lights" proved to be a popular and financial success – eventually grossing over $3 million. The British Film Institute cites it as Chaplin's finest accomplishment, and the critic James Agee hails the closing scene as "the greatest piece of acting and the highest moment in movies". "City Lights" became Chaplin's personal favourite of his films and remained so throughout his life. "City Lights" had been a success, but Chaplin was unsure if he could make another picture without dialogue. He remained convinced that sound would not work in his films, but was also "obsessed by a depressing fear of being old-fashioned." In this state of uncertainty, early in 1931, the comedian decided to take a holiday and ended up travelling for 16 months. 
He spent months travelling Western Europe, including extended stays in France and Switzerland, and spontaneously decided to visit Japan. The day after he arrived in Japan, Prime Minister Inukai Tsuyoshi was assassinated by ultra-nationalists in the May 15 Incident. The group's original plan had been to provoke a war with the United States by assassinating Chaplin at a welcome reception organised by the prime minister, but the plan had been foiled due to delayed public announcement of the event's date. In his autobiography, Chaplin recalled that on his return to Los Angeles, "I was confused and without plan, restless and conscious of an extreme loneliness". He briefly considered retiring and moving to China. Chaplin's loneliness was relieved when he met 21-year-old actress Paulette Goddard in July 1932, and the pair began a relationship. He was not ready to commit to a film, however, and focused on writing a serial about his travels (published in "Woman's Home Companion"). The trip had been a stimulating experience for Chaplin, including meetings with several prominent thinkers, and he became increasingly interested in world affairs. The state of labour in America troubled him, and he feared that capitalism and machinery in the workplace would increase unemployment levels. It was these concerns that stimulated Chaplin to develop his new film. "Modern Times" was announced by Chaplin as "a satire on certain phases of our industrial life." Featuring the Tramp and Goddard as they endure the Great Depression, it took ten and a half months to film. Chaplin intended to use spoken dialogue but changed his mind during rehearsals. Like its predecessor, "Modern Times" employed sound effects but almost no speaking. Chaplin's performance of a gibberish song did, however, give the Tramp a voice for the only time on film. After recording the music, Chaplin released "Modern Times" in February 1936. 
It was his first feature in 15 years to adopt political references and social realism, a factor that attracted considerable press coverage despite Chaplin's attempts to downplay the issue. The film earned less at the box-office than his previous features and received mixed reviews, as some viewers disliked the politicising. Today, "Modern Times" is seen by the British Film Institute as one of Chaplin's "great features," while David Robinson says it shows the filmmaker at "his unrivalled peak as a creator of visual comedy." Following the release of "Modern Times", Chaplin left with Goddard for a trip to the Far East. The couple had refused to comment on the nature of their relationship, and it was not known whether they were married or not. Some time later, Chaplin revealed that they married in Canton during this trip. By 1938, the couple had drifted apart, as both focused heavily on their work, although Goddard was again his leading lady in his next feature film, "The Great Dictator". She eventually divorced Chaplin in Mexico in 1942, citing incompatibility and separation for more than a year. The 1940s saw Chaplin face a series of controversies, both in his work and in his personal life, which changed his fortunes and severely affected his popularity in the United States. The first of these was his growing boldness in expressing his political beliefs. Deeply disturbed by the surge of militaristic nationalism in 1930s world politics, Chaplin found that he could not keep these issues out of his work. Parallels between himself and Adolf Hitler had been widely noted: the pair were born four days apart, both had risen from poverty to world prominence, and Hitler wore the same toothbrush moustache as Chaplin. It was this physical resemblance that supplied the plot for Chaplin's next film, "The Great Dictator", which directly satirised Hitler and attacked fascism. 
Chaplin spent two years developing the script, and began filming in September 1939 – six days after Britain declared war on Germany. He had submitted to using spoken dialogue, partly out of acceptance that he had no other choice, but also because he recognised it as a better method for delivering a political message. Making a comedy about Hitler was seen as highly controversial, but Chaplin's financial independence allowed him to take the risk. "I was determined to go ahead," he later wrote, "for Hitler must be laughed at." Chaplin replaced the Tramp (while wearing similar attire) with "A Jewish Barber", a reference to the Nazi party's belief that he was Jewish. In a dual performance, he also played the dictator "Adenoid Hynkel", who parodied Hitler. "The Great Dictator" spent a year in production and was released in October 1940. The film generated a vast amount of publicity, with a critic for "The New York Times" calling it "the most eagerly awaited picture of the year", and it was one of the biggest money-makers of the era. The ending was unpopular, however, and generated controversy. Chaplin concluded the film with a five-minute speech in which he abandoned his barber character, looked directly into the camera, and pleaded against war and fascism. Charles J. Maland has identified this overt preaching as triggering a decline in Chaplin's popularity, and writes, "Henceforth, no movie fan would ever be able to separate the dimension of politics from [his] star image". "The Great Dictator" received five Academy Award nominations, including Best Picture, Best Original Screenplay and Best Actor. In the mid-1940s, Chaplin was involved in a series of trials that occupied most of his time and significantly affected his public image. The troubles stemmed from his affair with an aspirant actress named Joan Barry, with whom he was involved intermittently between June 1941 and the autumn of 1942. 
Barry, who displayed obsessive behaviour and was twice arrested after they separated, reappeared the following year and announced that she was pregnant with Chaplin's child. As Chaplin denied the claim, Barry filed a paternity suit against him. The director of the Federal Bureau of Investigation (FBI), J. Edgar Hoover, who had long been suspicious of Chaplin's political leanings, used the opportunity to generate negative publicity about him. As part of a smear campaign to damage Chaplin's image, the FBI named him in four indictments related to the Barry case. Most serious of these was an alleged violation of the Mann Act, which prohibits the transportation of women across state boundaries for sexual purposes. The historian Otto Friedrich has called this an "absurd prosecution" of an "ancient statute", yet if Chaplin had been found guilty, he faced 23 years in jail. Three charges lacked sufficient evidence to proceed to court, but the Mann Act trial began on 21 March 1944. Chaplin was acquitted two weeks later, on 4 April. The case was frequently headline news, with "Newsweek" calling it the "biggest public relations scandal since the Fatty Arbuckle murder trial in 1921." Barry's child, Carol Ann, was born in October 1943, and the paternity suit went to court in December 1944. After two arduous trials, in which the prosecuting lawyer accused him of "moral turpitude", Chaplin was declared to be the father. Evidence from blood tests indicating otherwise was not admissible, and the judge ordered Chaplin to pay child support until Carol Ann turned 21. Media coverage of the paternity suit was influenced by the FBI, as information was fed to the prominent gossip columnist Hedda Hopper, and Chaplin was portrayed in an overwhelmingly critical light.
The controversy surrounding Chaplin increased when, two weeks after the paternity suit was filed, it was announced that he had married his newest protégée, 18-year-old Oona O'Neill – daughter of the American playwright Eugene O'Neill. Chaplin, then 54, had been introduced to her by a film agent seven months earlier. In his autobiography, Chaplin described meeting O'Neill as "the happiest event of my life", and claimed to have found "perfect love". Chaplin's son, Charles Jr., reported that Oona "worshipped" his father. The couple remained married until Chaplin's death, and had eight children over 18 years: Geraldine Leigh (b. July 1944), Michael John (b. March 1946), Josephine Hannah (b. March 1949), Victoria (b. May 1951), Eugene Anthony (b. August 1953), Jane Cecil (b. May 1957), Annette Emily (b. December 1959), and Christopher James (b. July 1962). Chaplin claimed that the Barry trials had "crippled [his] creativeness", and it was some time before he began working again. In April 1946, he finally began filming a project that had been in development since 1942. "Monsieur Verdoux" was a black comedy, the story of a French bank clerk, Verdoux (Chaplin), who loses his job and begins marrying and murdering wealthy widows to support his family. Chaplin's inspiration for the project came from Orson Welles, who wanted him to star in a film about the French serial killer Henri Désiré Landru. Chaplin decided that the concept would "make a wonderful comedy", and paid Welles $5,000 for the idea. Chaplin again vocalised his political views in "Monsieur Verdoux", criticising capitalism and arguing that the world encourages mass killing through wars and weapons of mass destruction. Because of this, the film met with controversy when it was released in April 1947; Chaplin was booed at the premiere, and there were calls for a boycott. "Monsieur Verdoux" was the first Chaplin release that failed both critically and commercially in the United States. 
It was more successful abroad, and Chaplin's screenplay was nominated for an Academy Award. He was proud of the film, writing in his autobiography, ""Monsieur Verdoux" is the cleverest and most brilliant film I have yet made." The negative reaction to "Monsieur Verdoux" was largely the result of changes in Chaplin's public image. Along with the damage from the Joan Barry scandal, he was publicly accused of being a communist. His political activity had heightened during World War II, when he campaigned for the opening of a Second Front to help the Soviet Union and supported various Soviet–American friendship groups. He was also friendly with several suspected communists, and attended functions given by Soviet diplomats in Los Angeles. In the political climate of 1940s America, such activities meant Chaplin was considered, as Larcher writes, "dangerously progressive and amoral." The FBI wanted him out of the country, and launched an official investigation in early 1947. Chaplin denied being a communist, instead calling himself a "peacemonger", but felt the government's effort to suppress the ideology was an unacceptable infringement of civil liberties. Unwilling to be quiet about the issue, he openly protested against the trials of Communist Party members and the activities of the House Un-American Activities Committee. Chaplin received a subpoena to appear before HUAC but was not called to testify. As his activities were widely reported in the press, and Cold War fears grew, questions were raised over his failure to take American citizenship. Calls were made for him to be deported; in one extreme and widely published example, Representative John E. Rankin, who helped establish HUAC, told Congress in June 1947: "[Chaplin's] very life in Hollywood is detrimental to the moral fabric of America. [If he is deported] ... his loathsome pictures can be kept from before the eyes of the American youth. He should be deported and gotten rid of at once." 
Although Chaplin remained politically active in the years following the failure of "Monsieur Verdoux", his next film, about a forgotten music hall comedian and a young ballerina in Edwardian London, was devoid of political themes. "Limelight" was heavily autobiographical, alluding not only to Chaplin's childhood and the lives of his parents, but also to his loss of popularity in the United States. The cast included various members of his family, including his five oldest children and his half-brother, Wheeler Dryden. Filming began in November 1951, by which time Chaplin had spent three years working on the story. He aimed for a more serious tone than any of his previous films, regularly using the word "melancholy" when explaining his plans to his co-star Claire Bloom. "Limelight" featured a cameo appearance from Buster Keaton, whom Chaplin cast as his stage partner in a pantomime scene. This marked the only time the comedians worked together. Chaplin decided to hold the world premiere of "Limelight" in London, since it was the setting of the film. As he left Los Angeles, he expressed a premonition that he would not be returning. At New York, he boarded the RMS "Queen Elizabeth" with his family on 18 September 1952. The next day, United States Attorney General James P. McGranery revoked Chaplin's re-entry permit and stated that he would have to submit to an interview concerning his political views and moral behaviour to re-enter the US. Although McGranery told the press that he had "a pretty good case against Chaplin", Maland has concluded, on the basis of the FBI files that were released in the 1980s, that the US government had no real evidence to prevent Chaplin's re-entry. It is likely that he would have gained entry if he had applied for it. 
However, when Chaplin received a cablegram informing him of the news, he privately decided to cut his ties with the United States. Because all of his property remained in America, Chaplin refrained from saying anything negative about the incident to the press. The scandal attracted vast attention, but Chaplin and his film were warmly received in Europe. In America, the hostility towards him continued, and, although it received some positive reviews, "Limelight" was subjected to a wide-scale boycott. Reflecting on this, Maland writes that Chaplin's fall, from an "unprecedented" level of popularity, "may be the most dramatic in the history of stardom in America". Chaplin did not attempt to return to the United States after his re-entry permit was revoked, and instead sent his wife to settle his affairs. The couple decided to settle in Switzerland and, in January 1953, the family moved into their permanent home: Manoir de Ban, an estate overlooking Lake Geneva in Corsier-sur-Vevey. Chaplin put his Beverly Hills house and studio up for sale in March, and surrendered his re-entry permit in April. The next year, his wife renounced her US citizenship and became a British citizen. Chaplin severed the last of his professional ties with the United States in 1955, when he sold the remainder of his stock in United Artists, which had been in financial difficulty since the early 1940s. Chaplin remained a controversial figure throughout the 1950s, especially after he was awarded the International Peace Prize by the communist-led World Peace Council, and after his meetings with Zhou Enlai and Nikita Khrushchev. He began developing his first European film, "A King in New York", in 1954. Casting himself as an exiled king who seeks asylum in the United States, Chaplin included several of his recent experiences in the screenplay. His son, Michael, was cast as a boy whose parents are targeted by the FBI, while Chaplin's character faces accusations of communism. 
The political satire parodied HUAC and attacked elements of 1950s culture – including consumerism, plastic surgery, and wide-screen cinema. In a review, the playwright John Osborne called it Chaplin's "most bitter" and "most openly personal" film. In a 1957 interview, when asked to clarify his political views, Chaplin stated "As for politics, I am an anarchist. I hate government and rules – and fetters ... People must be free." Chaplin founded a new production company, Attica, and used Shepperton Studios for the shooting. Filming in England proved a difficult experience, as he was used to his own Hollywood studio and familiar crew, and no longer had limitless production time. According to Robinson, this had an effect on the quality of the film. "A King in New York" was released in September 1957, and received mixed reviews. Chaplin banned American journalists from its Paris première and decided not to release the film in the United States. This severely limited its revenue, although it achieved moderate commercial success in Europe. "A King in New York" was not shown in America until 1973. In the last two decades of his career, Chaplin concentrated on re-editing and scoring his old films for re-release, along with securing their ownership and distribution rights. In an interview he granted in 1959, the year of his 70th birthday, Chaplin stated that there was still "room for the Little Man in the atomic age". The first of these re-releases was "The Chaplin Revue" (1959), which included new versions of "A Dog's Life", "Shoulder Arms", and "The Pilgrim". In America, the political atmosphere began to change and attention was once again directed to Chaplin's films instead of his views. In July 1962, "The New York Times" published an editorial stating that "we do not believe the Republic would be in danger if yesterday's unforgotten little tramp were allowed to amble down the gangplank of a steamer or plane in an American port". 
The same month, Chaplin was invested with the honorary degree of Doctor of Letters by the universities of Oxford and Durham. In November 1963, the Plaza Theater in New York started a year-long series of Chaplin's films, including "Monsieur Verdoux" and "Limelight", which gained excellent reviews from American critics. September 1964 saw the release of Chaplin's memoirs, "My Autobiography", which he had been working on since 1957. The 500-page book became a worldwide best-seller. It focused on his early years and personal life, and was criticised for lacking information on his film career. Shortly after the publication of his memoirs, Chaplin began work on "A Countess from Hong Kong" (1967), a romantic comedy based on a script he had written for Paulette Goddard in the 1930s. Set on an ocean liner, it starred Marlon Brando as an American ambassador and Sophia Loren as a stowaway found in his cabin. The film differed from Chaplin's earlier productions in several aspects. It was his first to use Technicolor and the widescreen format, while he concentrated on directing and appeared on-screen only in a cameo role as a seasick steward. He also signed a deal with Universal Pictures and appointed his assistant, Jerome Epstein, as the producer. Chaplin was paid a $600,000 director's fee as well as a percentage of the gross receipts. "A Countess from Hong Kong" premiered in January 1967, to unfavourable reviews, and was a box-office failure. Chaplin was deeply hurt by the negative reaction to the film, which turned out to be his last. Chaplin suffered a series of minor strokes in the late 1960s, which marked the beginning of a slow decline in his health. Despite the setbacks, he was soon writing a new film script, "The Freak", a story of a winged girl found in South America, which he intended as a starring vehicle for his daughter, Victoria. His fragile health prevented the project from being realised. 
In the early 1970s, Chaplin concentrated on re-releasing his old films, including "The Kid" and "The Circus". In 1971, he was made a Commander of the National Order of the Legion of Honour at the Cannes Film Festival. The following year, he was honoured with a special award by the Venice Film Festival. In 1972, the Academy of Motion Picture Arts and Sciences offered Chaplin an Honorary Award, which Robinson sees as a sign that America "wanted to make amends". Chaplin was initially hesitant about accepting but decided to return to the US for the first time in 20 years. The visit attracted a large amount of press coverage and, at the Academy Awards gala, he was given a 12-minute standing ovation, the longest in the Academy's history. Visibly emotional, Chaplin accepted his award for "the incalculable effect he has had in making motion pictures the art form of this century". Although Chaplin still had plans for future film projects, by the mid-1970s he was very frail. He experienced several further strokes, which made it difficult for him to communicate, and he had to use a wheelchair. His final projects were compiling a pictorial autobiography, "My Life in Pictures" (1974), and scoring "A Woman of Paris" for re-release in 1976. He also appeared in a documentary about his life, "The Gentleman Tramp" (1975), directed by Richard Patterson. In the 1975 New Year Honours, Chaplin was awarded a knighthood by Queen Elizabeth II, though he was too weak to kneel and received the honour in his wheelchair. By October 1977, Chaplin's health had declined to the point that he needed constant care. In the early morning of 25 December 1977, Chaplin died at home after suffering a stroke in his sleep. He was 88 years old. The funeral, on 27 December, was a small and private Anglican ceremony, according to his wishes. Chaplin was interred in the Corsier-sur-Vevey cemetery. 
Among the film industry's tributes, director René Clair wrote, "He was a monument of the cinema, of all countries and all times ... the most beautiful gift the cinema made to us." Actor Bob Hope declared, "We were lucky to have lived in his time." On 1 March 1978, Chaplin's coffin was dug up and stolen from its grave by two unemployed immigrants, Roman Wardas, from Poland, and Gantcho Ganev, from Bulgaria. The body was held for ransom in an attempt to extort money from Oona Chaplin. The pair were caught in a large police operation in May, and Chaplin's coffin was found buried in a field in the nearby village of Noville. It was re-interred in the Corsier cemetery surrounded by reinforced concrete. Chaplin believed his first influence to be his mother, who entertained him as a child by sitting at the window and mimicking passers-by: "it was through watching her that I learned not only how to express emotions with my hands and face, but also how to observe and study people." Chaplin's early years in music hall allowed him to see stage comedians at work; he also attended the Christmas pantomimes at Drury Lane, where he studied the art of clowning through performers like Dan Leno. Chaplin's years with the Fred Karno company had a formative effect on him as an actor and filmmaker. Simon Louvish writes that the company was his "training ground", and it was here that Chaplin learned to vary the pace of his comedy. The concept of mixing pathos with slapstick was learnt from Karno, who also used elements of absurdity that became familiar in Chaplin's gags. From the film industry, Chaplin drew upon the work of the French comedian Max Linder, whose films he greatly admired. In developing the Tramp costume and persona, he was likely inspired by the American vaudeville scene, where tramp characters were common. Chaplin never spoke more than cursorily about his filmmaking methods, claiming such a thing would be tantamount to a magician spoiling his own illusion. 
Little was known about his working process throughout his lifetime, but research from film historians – particularly the findings of Kevin Brownlow and David Gill that were presented in the three-part documentary "Unknown Chaplin" (1983) – has since revealed his unique working method. Until he began making spoken dialogue films with "The Great Dictator", Chaplin never shot from a completed script. Many of his early films began with only a vague premise – for example "Charlie enters a health spa" or "Charlie works in a pawn shop." He then had sets constructed and worked with his stock company to improvise gags and "business" using them, almost always working the ideas out on film. As ideas were accepted and discarded, a narrative structure would emerge, frequently requiring Chaplin to reshoot an already-completed scene that might have otherwise contradicted the story. From "A Woman of Paris" onward Chaplin began the filming process with a prepared plot, but Robinson writes that every film up to "Modern Times" "went through many metamorphoses and permutations before the story took its final form." Producing films in this manner meant Chaplin took longer to complete his pictures than almost any other filmmaker at the time. If he was out of ideas, he often took a break from the shoot, which could last for days, while keeping the studio ready for when inspiration returned. Delaying the process further was Chaplin's rigorous perfectionism. According to his friend Ivor Montagu, "nothing but perfection would be right" for the filmmaker. Because he personally funded his films, Chaplin was at liberty to strive for this goal and shoot as many takes as he wished. The number was often excessive, for instance 53 takes for every finished take in "The Kid". For "The Immigrant", a 20-minute short, Chaplin shot 40,000 feet of film – enough for a feature-length picture. 
Describing his working method as "sheer perseverance to the point of madness", Chaplin would be completely consumed by the production of a picture. Robinson writes that even in Chaplin's later years, his work continued "to take precedence over everything and everyone else." The combination of story improvisation and relentless perfectionism – which resulted in days of effort and thousands of feet of film being wasted, all at enormous expense – often proved taxing for Chaplin, who, in frustration, would lash out at his actors and crew. Chaplin exercised complete control over his pictures, to the extent that he would act out the other roles for his cast, expecting them to imitate him exactly. He personally edited all of his films, trawling through the large amounts of footage to create the exact picture he wanted. As a result of his complete independence, he was identified by the film historian Andrew Sarris as one of the first auteur filmmakers. Chaplin did receive help, notably from his long-time cinematographer Roland Totheroh, brother Sydney Chaplin, and various assistant directors such as Harry Crocker and Charles Reisner. While Chaplin's comedic style is broadly defined as slapstick, it is considered restrained and intelligent, with the film historian Philip Kemp describing his work as a mix of "deft, balletic physical comedy and thoughtful, situation-based gags". Chaplin diverged from conventional slapstick by slowing the pace and exhausting each scene of its comic potential, with more focus on developing the viewer's relationship to the characters. Unlike conventional slapstick comedies, Robinson states that the comic moments in Chaplin's films centre on the Tramp's attitude to the things happening to him: the humour does not come from the Tramp bumping into a tree, but from his lifting his hat to the tree in apology. 
Dan Kamin writes that Chaplin's "quirky mannerisms" and "serious demeanour in the midst of slapstick action" are other key aspects of his comedy, while the surreal transformation of objects and the employment of in-camera trickery are also common features. Chaplin's silent films typically follow the Tramp's efforts to survive in a hostile world. The character lives in poverty and is frequently treated badly, but remains kind and upbeat; defying his social position, he strives to be seen as a gentleman. As Chaplin said in 1925, "The whole point of the Little Fellow is that no matter how down on his uppers he is, no matter how well the jackals succeed in tearing him apart, he's still a man of dignity." The Tramp defies authority figures and "gives as good as he gets", leading Robinson and Louvish to see him as a representative for the underprivileged – an "everyman turned heroic saviour". Hansmeyer notes that several of Chaplin's films end with "the homeless and lonely Tramp [walking] optimistically ... into the sunset ... to continue his journey". The infusion of pathos is a well-known aspect of Chaplin's work, and Larcher notes his reputation for "[inducing] laughter and tears". Sentimentality in his films comes from a variety of sources, with Louvish pinpointing "personal failure, society's strictures, economic disaster, and the elements." Chaplin sometimes drew on tragic events when creating his films, as in the case of "The Gold Rush" (1925), which was inspired by the fate of the Donner Party. Constance B. Kuriyama has identified serious underlying themes in the early comedies, such as greed ("The Gold Rush") and loss ("The Kid"). Chaplin also touched on controversial issues: immigration ("The Immigrant", 1917); illegitimacy ("The Kid", 1921); and drug use ("Easy Street", 1917). He often explored these topics ironically, making comedy out of suffering. 
Social commentary was a feature of Chaplin's films from early in his career, as he portrayed the underdog in a sympathetic light and highlighted the difficulties of the poor. Later, as he developed a keen interest in economics and felt obliged to publicise his views, Chaplin began incorporating overtly political messages into his films. "Modern Times" (1936) depicted factory workers in dismal conditions, "The Great Dictator" (1940) parodied Adolf Hitler and Benito Mussolini and ended in a speech against nationalism, "Monsieur Verdoux" (1947) criticised war and capitalism, and "A King in New York" (1957) attacked McCarthyism. Several of Chaplin's films incorporate autobiographical elements, and the psychologist Sigmund Freud believed that Chaplin "always plays only himself as he was in his dismal youth". "The Kid" is thought to reflect Chaplin's childhood trauma of being sent into an orphanage, the main characters in "Limelight" (1952) contain elements from the lives of his parents, and "A King in New York" references Chaplin's experiences of being shunned by the United States. Many of his sets, especially in street scenes, bear a strong similarity to Kennington, where he grew up. Stephen M. Weissman has argued that Chaplin's problematic relationship with his mentally ill mother was often reflected in his female characters and the Tramp's desire to save them. Regarding the structure of Chaplin's films, the scholar Gerald Mast sees them as consisting of sketches tied together by the same theme and setting, rather than having a tightly unified storyline. Visually, his films are simple and economic, with scenes portrayed as if set on a stage. His approach to filming was described by the art director Eugène Lourié: "Chaplin did not think in 'artistic' images when he was shooting. He believed that action is the main thing. The camera is there to photograph the actors". In his autobiography, Chaplin wrote, "Simplicity is best ... 
pompous effects slow up action, are boring and unpleasant ... The camera should not intrude." This approach has prompted criticism, since the 1940s, for being "old fashioned", while the film scholar Donald McCaffrey sees it as an indication that Chaplin never completely understood film as a medium. Kamin, however, comments that Chaplin's comedic talent would not be enough to remain funny on screen if he did not have an "ability to conceive and direct scenes specifically for the film medium". Chaplin developed a passion for music as a child and taught himself to play the piano, violin, and cello. He considered the musical accompaniment of a film to be important, and from "A Woman of Paris" onwards he took an increasing interest in this area. With the advent of sound technology, Chaplin began using a synchronised orchestral soundtrack – composed by himself – for "City Lights" (1931). He thereafter composed the scores for all of his films, and from the late 1950s to his death, he scored all of his silent features and some of his short films. As Chaplin was not a trained musician, he could not read sheet music and needed the help of professional composers, such as David Raksin, Raymond Rasch and Eric James, when creating his scores. Musical directors were employed to oversee the recording process, such as Alfred Newman for "City Lights". Although some critics have claimed that credit for his film music should be given to the composers who worked with him, Raksin – who worked with Chaplin on "Modern Times" – stressed Chaplin's creative position and active participation in the composing process. This process, which could take months, would start with Chaplin describing to the composer(s) exactly what he wanted and singing or playing tunes he had improvised on the piano. These tunes were then developed further in a close collaboration between the composer(s) and Chaplin. 
According to film historian Jeffrey Vance, "although he relied upon associates to arrange varied and complex instrumentation, the musical imperative is his, and not a note in a Chaplin musical score was placed there without his assent." Chaplin's compositions produced three popular songs. "Smile", composed originally for "Modern Times" (1936) and later set to lyrics by John Turner and Geoffrey Parsons, was a hit for Nat King Cole in 1954. For "Limelight", Chaplin composed "Terry's Theme", which was popularised by Jimmy Young as "Eternally" (1952). Finally, "This Is My Song", performed by Petula Clark for "A Countess from Hong Kong" (1967), reached number one on the UK and other European charts. Chaplin also received his only competitive Oscar for his composition work, as the "Limelight" theme won an Academy Award for Best Original Score in 1973 following the film's re-release. In 1998, the film critic Andrew Sarris called Chaplin "arguably the single most important artist produced by the cinema, certainly its most extraordinary performer and probably still its most universal icon". He is described by the British Film Institute as "a towering figure in world culture", and was included in "Time" magazine's list of the 100 most important people of the 20th century for the "laughter [he brought] to millions" and because he "more or less invented global recognizability and helped turn an industry into an art". The image of the Tramp has become a part of cultural history; according to Simon Louvish, the character is recognisable to people who have never seen a Chaplin film, and in places where his films are never shown. The critic Leonard Maltin has written of the "unique" and "indelible" nature of the Tramp, and argued that no other comedian matched his "worldwide impact". Praising the character, Richard Schickel suggests that Chaplin's films with the Tramp contain the most "eloquent, richly comedic expressions of the human spirit" in movie history. 
Memorabilia connected to the character still fetches large sums in auctions: in 2006 a bowler hat and a bamboo cane that were part of the Tramp's costume were bought for $140,000 in a Los Angeles auction. As a filmmaker, Chaplin is considered a pioneer and one of the most influential figures of the early twentieth century. He is often credited as one of the medium's first artists. Film historian Mark Cousins has written that Chaplin "changed not only the imagery of cinema, but also its sociology and grammar" and claims that Chaplin was as important to the development of comedy as a genre as D.W. Griffith was to drama. He was the first to popularise feature-length comedy and to slow down the pace of action, adding pathos and subtlety to it. Although his work is mostly classified as slapstick, Chaplin's drama "A Woman of Paris" (1923) was a major influence on Ernst Lubitsch's film "The Marriage Circle" (1924) and thus played a part in the development of "sophisticated comedy". According to David Robinson, Chaplin's innovations were "rapidly assimilated to become part of the common practice of film craft." Filmmakers who cited Chaplin as an influence include Federico Fellini (who called Chaplin "a sort of Adam, from whom we are all descended"), Jacques Tati ("Without him I would never have made a film"), René Clair ("He inspired practically every filmmaker"), Michael Powell, Billy Wilder, Vittorio De Sica, and Richard Attenborough. Russian filmmaker Andrei Tarkovsky praised Chaplin as "the only person to have gone down into cinematic history without any shadow of a doubt. The films he left behind can never grow old." Chaplin also strongly influenced the work of later comedians. Marcel Marceau said he was inspired to become a mime artist after watching Chaplin, while the actor Raj Kapoor based his screen persona on the Tramp. Mark Cousins has also detected Chaplin's comedic style in the French character Monsieur Hulot and the Italian character Totò. 
In other fields, Chaplin helped inspire the cartoon characters Felix the Cat and Mickey Mouse, and was an influence on the Dada art movement. As one of the founding members of United Artists, Chaplin also had a role in the development of the film industry. Gerald Mast has written that although UA never became a major company like MGM or Paramount Pictures, the idea that directors could produce their own films was "years ahead of its time". In the 21st century, several of Chaplin's films are still regarded as classics and among the greatest ever made. The 2012 "Sight & Sound" poll, which compiles "top ten" ballots from film critics and directors to determine each group's most acclaimed films, saw "City Lights" rank among the critics' top 50, "Modern Times" inside the top 100, and "The Great Dictator" and "The Gold Rush" placed in the top 250. The top 100 films as voted on by directors included "Modern Times" at number 22, "City Lights" at number 30, and "The Gold Rush" at number 91. Every one of Chaplin's features received a vote. In 2007, the American Film Institute named "City Lights" the 11th greatest American film of all time, while "The Gold Rush" and "Modern Times" again ranked in the top 100. Books about Chaplin continue to be published regularly, and he is a popular subject for media scholars and film archivists. Many of Chaplin's films have had a DVD and Blu-ray release. Chaplin's legacy is managed on behalf of his children by the Chaplin office, located in Paris. The office represents Association Chaplin, founded by some of his children "to protect the name, image and moral rights" to his body of work, Roy Export SAS, which owns the copyright to most of his films made after 1918, and Bubbles Incorporated S.A., which owns the copyrights to his image and name. 
Their central archive is held at the archives of Montreux, Switzerland, and scanned versions of its contents, including 83,630 images, 118 scripts, 976 manuscripts, 7,756 letters, and thousands of other documents, are available for research purposes at the Chaplin Research Centre at the Cineteca di Bologna. The photographic archive, which includes approximately 10,000 photographs from Chaplin's life and career, is kept at the Musée de l'Elysée in Lausanne, Switzerland. The British Film Institute has also established the Charles Chaplin Research Foundation, and the first international Charles Chaplin Conference was held in London in July 2005. Elements for many of Chaplin's films are held by the Academy Film Archive as part of the Roy Export Chaplin Collection. Chaplin's final home, Manoir de Ban in Corsier-sur-Vevey, Switzerland, has been converted into a museum named "Chaplin's World". It opened on 17 April 2016 after 15 years of development, and is described by Reuters as "an interactive museum showcasing the life and works of Charlie Chaplin". On the 128th anniversary of his birth, a record-setting 662 people dressed as the Tramp in an event organised by the museum. Previously, the Museum of the Moving Image in London held a permanent display on Chaplin, and hosted an exhibition dedicated to his life and career in 1988. The London Film Museum hosted an exhibition called "Charlie Chaplin – The Great Londoner", from 2010 until 2013. In London, a statue of Chaplin as the Tramp, sculpted by John Doubleday and unveiled in 1981, is located in Leicester Square. The city also includes a road named after him in central London, "Charlie Chaplin Walk", which is the location of the BFI IMAX. There are nine blue plaques memorialising Chaplin in London, Hampshire, and Yorkshire. The Swiss town of Vevey named a park in his honour in 1980 and erected a statue there in 1982. In 2011, two large murals depicting Chaplin on two 14-storey buildings were also unveiled in Vevey. 
Chaplin has also been honoured by the Irish town of Waterville, where he spent several summers with his family in the 1960s. A statue was erected in 1998; since 2011, the town has been host to the annual Charlie Chaplin Comedy Film Festival, which was founded to celebrate Chaplin's legacy and to showcase new comic talent. In other tributes, a minor planet, 3623 Chaplin – discovered by Soviet astronomer Lyudmila Karachkina in 1981 – is named after Chaplin. Throughout the 1980s, the Tramp image was used by IBM to advertise their personal computers. The 100th anniversary of Chaplin's birth in 1989 was marked with several events around the world, and on 15 April 2011, a day before his 122nd birthday, Google celebrated him with a special Google Doodle video on its global and other country-wide homepages. Many countries, spanning six continents, have honoured Chaplin with a postal stamp. Chaplin is the subject of a biographical film, "Chaplin" (1992), directed by Richard Attenborough and starring Robert Downey Jr. in the title role, with Geraldine Chaplin playing Hannah Chaplin. He is also a character in the historical drama film "The Cat's Meow" (2001), played by Eddie Izzard, and in the made-for-television movie "The Scarlett O'Hara War" (1980), played by Clive Revill. A television series about Chaplin's childhood, "Young Charlie Chaplin", ran on PBS in 1989, and was nominated for an Emmy Award for Outstanding Children's Program. The French film "The Price of Fame" (2014) is a fictionalised account of the robbery of Chaplin's grave. Chaplin's life has also been the subject of several stage productions. Two musicals, "Little Tramp" and "Chaplin", were produced in the early 1990s. In 2006, Thomas Meehan and Christopher Curtis created another musical, "Limelight: The Story of Charlie Chaplin", which was first performed at the La Jolla Playhouse in San Diego in 2010. It was adapted for Broadway two years later, re-titled "Chaplin – A Musical". 
Chaplin was portrayed by Robert McClure in both productions. In 2013, two plays about Chaplin premiered in Finland: "Chaplin" at the Svenska Teatern, and "Kulkuri" ("The Tramp") at the Tampere Workers' Theatre. Chaplin has also been characterised in literary fiction. He is the protagonist of Robert Coover's short story "Charlie in the House of Rue" (1980; reprinted in Coover's 1987 collection "A Night at the Movies"), and of Glen David Gold's "Sunnyside" (2009), a historical novel set in the First World War period. A day in Chaplin's life in 1909 is dramatised in the chapter titled "Modern Times" in Alan Moore's "Jerusalem" (2016), a novel set in the author's home town of Northampton, England. Chaplin received many awards and honours, especially later in life. In the 1975 New Year Honours, he was appointed a Knight Commander of the Most Excellent Order of the British Empire (KBE). He was also awarded honorary Doctor of Letters degrees by the University of Oxford and the University of Durham in 1962. In 1965, he and Ingmar Bergman were joint winners of the Erasmus Prize and, in 1971, he was appointed a Commander of the National Order of the Legion of Honour by the French government. From the film industry, Chaplin received a special Golden Lion at the Venice Film Festival in 1972, and a Lifetime Achievement Award from the Lincoln Center Film Society the same year. The latter has since been presented annually to filmmakers as The Chaplin Award. Chaplin was given a star on the Hollywood Walk of Fame in 1972, having been previously excluded because of his political beliefs. Chaplin received three Academy Awards: an Honorary Award for "versatility and genius in acting, writing, directing, and producing "The Circus"" in 1929, a second Honorary Award for "the incalculable effect he has had in making motion pictures the art form of this century" in 1972, and a Best Score award in 1973 for "Limelight" (shared with Ray Rasch and Larry Russell). 
He was further nominated in the Best Actor, Best Original Screenplay, and Best Picture (as producer) categories for "The Great Dictator", and received another Best Original Screenplay nomination for "Monsieur Verdoux". In 1976, Chaplin was made a Fellow of the British Academy of Film and Television Arts (BAFTA). Six of Chaplin's films have been selected for preservation in the National Film Registry by the United States Library of Congress: "The Immigrant" (1917), "The Kid" (1921), "The Gold Rush" (1925), "City Lights" (1931), "Modern Times" (1936), and "The Great Dictator" (1940).
https://en.wikipedia.org/wiki?curid=5142
The World Factbook The World Factbook, also known as the CIA World Factbook, is a reference resource produced by the Central Intelligence Agency (CIA) with almanac-style information about the countries of the world. The official print version is available from the Government Printing Office. Other companies—such as Skyhorse Publishing—also print a paper edition. "The Factbook" is available in the form of a website that is partially updated every week. It is also available for download for use off-line. It provides a two- to three-page summary of the demographics, geography, communications, government, economy, and military of each of 267 international entities, including U.S.-recognized countries, dependencies, and other areas in the world. "The World Factbook" is prepared by the CIA for the use of U.S. government officials, and its style, format, coverage, and content are primarily designed to meet their requirements. However, it is frequently used as a resource for academic research papers and news articles. As a work of the U.S. government, it is in the public domain in the United States. In researching the "Factbook", the CIA uses a number of listed sources; other public and private sources are also consulted. Because the "Factbook" is in the public domain, people are free under United States law to redistribute it or parts of it in any way that they like, without permission of the CIA. However, the CIA requests that it be cited when the "Factbook" is used. Copying the official seal of the CIA without permission is prohibited by U.S. federal law, specifically the Central Intelligence Agency Act of 1949. Before November 2001, "The World Factbook" website was updated yearly; from 2004 to 2010 it was updated every two weeks; since 2010 it has been updated weekly. Generally, information available as of January 1 of the current year is used in preparing the "Factbook".
The first, classified, edition of the "Factbook" was published in August 1962, and the first unclassified version in June 1971. "The World Factbook" was first available to the public in print in 1975. In 2008 the CIA discontinued printing the "Factbook" itself, instead turning printing responsibilities over to the Government Printing Office. This happened due to a CIA decision to "focus Factbook resources" on the online edition. The "Factbook" has been on the World Wide Web since October 1994. The web version receives an average of 6 million visits per month; it can also be downloaded. The official printed version is sold by the Government Printing Office and National Technical Information Service. In past years, the "Factbook" was available on CD-ROM, microfiche, magnetic tape, and floppy disk. Many Internet sites use information and images from the CIA "World Factbook". Several publishers, including Grand River Books, Potomac Books (formerly known as Brassey's Inc.), and Skyhorse Publishing, have re-published the "Factbook" in recent years. As of July 2011, "The World Factbook" comprised 267 entities. The list of entities has changed over time; for example, later editions dropped the entries for French Guiana, Guadeloupe, Martinique, Mayotte, and Reunion because, besides being overseas departments, they had become overseas regions and an integral part of France. The "Factbook" is full of usually minor errors, inaccuracies, and out-of-date information, which are often repeated elsewhere due to the "Factbook"'s widespread use as a reference. For example, Albania was, until recently, described in the "Factbook" as 70% Muslim, 20% Eastern Orthodox, and 10% Roman Catholic, figures based on a survey conducted in 1939, before World War II; numerous surveys conducted after the fall of the Communist regime in 1990 have given quite different figures.
Another example is Singapore, which the "Factbook" states has a total fertility rate of 0.78 children per woman, despite figures from Statistics Singapore stating that the rate has been about 1.2–1.3 children per woman for at least the past several years; it is unclear when, or even whether, it ever dropped as low as 0.78. This low and inaccurate value then gets cited in news articles which state that Singapore has the world's lowest fertility, or at least use the figure for its shock value. Another serious problem is that the "Factbook" never cites its sources, making verification of the information it presents difficult if not impossible. In June 2009, National Public Radio (NPR), relying on information obtained from the "CIA World Factbook", put the number of Israeli Jews living in settlements in the West Bank and Israeli-annexed East Jerusalem at 250,000. However, a better estimate, based on State Department and Israeli sources, put the figure at about 500,000. NPR then issued a correction. Chuck Holmes, foreign editor for NPR Digital, said, "I'm surprised and displeased, and it makes me wonder what other information is out-of-date or incorrect in the CIA "World Factbook"." Scholars have acknowledged that some entries in the "Factbook" are out of date.
https://en.wikipedia.org/wiki?curid=5163
Country The term country refers to a political state or nation or its territory. It often refers to the land of an individual's birth, residence, or citizenship. A country may be an independent sovereign state or part of a larger state, a non-sovereign or formerly sovereign political division, a physical territory with a government, or a geographic region associated with sets of previously independent or differently associated people with distinct political characteristics. It is not inherently sovereign. "Countries" can refer both to sovereign states and to other political entities; at other times the word refers only to sovereign states. For example, the "CIA World Factbook" uses the word in its "Country name" field to refer to "a wide variety of dependencies, areas of special sovereignty, uninhabited islands, and other entities in addition to the traditional countries or independent states". The largest country in the world is Russia, while the most populous is China. The newest country is South Sudan. The word "country" comes from Old French "contrée", which derives from Vulgar Latin ("terra") "contrata" ("(land) lying opposite"; "(land) spread before"), derived from "contra" ("against, opposite"). It most likely entered the English language after the Franco-Norman invasion during the 11th century. In English the word has increasingly become associated with political divisions, so that one sense, associated with the indefinite article – "a country" – has through conflation become a synonym for state, or a former sovereign state, in the sense of sovereign territory or "district, native land".
Areas much smaller than a political state may be called by names such as the West Country in England, the Black Country (a heavily industrialized part of England), "Constable Country" (a part of East Anglia painted by John Constable), the "big country" (used in various contexts of the American West), "coal country" (used of parts of the US and elsewhere) and many other terms. The equivalent terms in French and other Romance languages ("pays" and variants) have not carried the process of being identified with political sovereign states as far as the English "country", deriving instead from "pagus", which designated the territory controlled by a medieval count, a title originally granted by the Roman Church. In many European countries the words are used for sub-divisions of the national territory, as in the German Bundesländer, as well as a less formal term for a sovereign state. France has very many "pays" that are officially recognized at some level, and are either natural regions, like the Pays de Bray, or reflect old political or economic entities, like the Pays de la Loire. A version of "country" can be found in the modern French language as "contrée", based on the word "cuntrée" in Old French, that is used similarly to the word "pays" to define non-state regions, but can also be used to describe a political state in some particular cases. The modern Italian "contrada" is a word with its meaning varying locally, but usually meaning a ward or similar small division of a town, or a village or hamlet in the countryside. The term "country" can refer to a sovereign state. There is no universal agreement on the number of "countries" in the world since a number of states have disputed sovereignty status.
By one application of the declarative theory of statehood and constitutive theory of statehood, there are 206 sovereign states, of which 193 are members of the United Nations, two have observer status at the UN (the Holy See and Palestine), and 11 others are neither members nor observers at the UN. The most recently proclaimed state is South Sudan, in 2011. The degree of autonomy of non-sovereign countries varies widely. Some are possessions of sovereign states, as several states have overseas territories (such as French Polynesia or the British Virgin Islands), with citizenry at times identical and at times distinct from their own. Such territories, with the exception of distinct dependent territories, are usually listed together with sovereign states on lists of countries, but may nonetheless be treated as a separate "country of origin" in international trade, as Hong Kong is. A few states consist of a union of smaller polities which are considered countries. Several organizations seek to identify trends in order to produce country classifications. Countries are often distinguished as developing countries or developed countries. The UN Department of Economic and Social Affairs annually produces the "World Economic Situation and Prospects" report, which classifies states as developed countries, economies in transition, or developing countries. The report classifies country development based on per capita gross national income. Within the broad categories, the United Nations identifies subgroups based on geographical location or ad hoc criteria. The UN outlines the geographical regions for developing economies as Africa, East Asia, South Asia, Western Asia, and Latin America and the Caribbean. The 2019 report recognizes only developed countries in North America, Europe, and Asia and the Pacific. The majority of economies in transition and developing countries are found in Africa, Asia, and Latin America and the Caribbean.
The UN additionally recognizes multiple trends that impact the developmental status of countries in the "World Economic Situation and Prospects". The report highlights fuel-exporting and fuel-importing countries, as well as small island developing states and landlocked developing countries. It also identifies heavily indebted poor countries. The World Bank also classifies countries based on GNI per capita. Using the "World Bank Atlas method", it classifies countries as low-income economies, lower-middle-income economies, upper-middle-income economies, or high-income economies. For the 2020 fiscal year, the World Bank defines low-income economies as countries with a GNI per capita of $1,025 or less in 2018; lower-middle-income economies as countries with a GNI per capita between $1,026 and $3,995; upper-middle-income economies as countries with a GNI per capita between $3,996 and $12,375; and high-income economies as countries with a GNI per capita of $12,376 or more. It also identifies regional trends. The World Bank defines its regions as East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, North America, South Asia, and Sub-Saharan Africa. Lastly, the World Bank distinguishes countries based on its own operational policies. The three categories include International Development Association (IDA) countries, International Bank for Reconstruction and Development (IBRD) countries, and Blend countries.
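The World Bank thresholds described above amount to a simple threshold lookup. The following sketch illustrates that rule set in Python; it is not an official World Bank tool, and the function and variable names are invented for illustration.

```python
# Illustrative sketch of the World Bank's FY2020 income classification.
# Thresholds are the 2018 GNI-per-capita cut-offs (US dollars) quoted in
# the text above; names here are hypothetical, not an official API.
FY2020_THRESHOLDS = [
    (1_025, "low-income"),            # GNI per capita <= $1,025
    (3_995, "lower-middle-income"),   # $1,026 - $3,995
    (12_375, "upper-middle-income"),  # $3,996 - $12,375
]

def classify_income(gni_per_capita: float) -> str:
    """Return the income group for a given GNI per capita (2018 USD)."""
    for upper_bound, group in FY2020_THRESHOLDS:
        if gni_per_capita <= upper_bound:
            return group
    return "high-income"              # $12,376 or more

if __name__ == "__main__":
    for gni in (800, 2_500, 9_000, 45_000):
        print(gni, "->", classify_income(gni))
```

Note that the boundaries are inclusive on the upper end of each band, matching the dollar ranges quoted in the text.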
https://en.wikipedia.org/wiki?curid=5165
Copenhagen Copenhagen is the capital and most populous city of Denmark. As of 1 January 2020, the city had a population of 794,128 with 632,340 in Copenhagen Municipality, 104,305 in Frederiksberg Municipality, 42,989 in Tårnby Municipality, and 14,494 in Dragør Municipality. It forms the core of the wider urban area of Copenhagen (population 1,330,993) and the Copenhagen metropolitan area (population 2,057,142). Copenhagen is situated on the eastern coast of the island of Zealand; another small portion of the city is located on Amager, and it is separated from Malmö, Sweden, by the strait of Øresund. The Øresund Bridge connects the two cities by rail and road. Originally a Viking fishing village established in the 10th century in the vicinity of what is now Gammel Strand, Copenhagen became the capital of Denmark in the early 15th century. Beginning in the 17th century, it consolidated its position as a regional centre of power with its institutions, defences and armed forces. During the Renaissance the city served as the de facto capital of the Kalmar Union, being the seat of the government that ruled the entire present-day Nordic region in a personal union with Sweden and Norway under the Danish monarch as head of state. Under the union the city flourished as the cultural and economic centre of Scandinavia for well over 120 years, from the 15th century until the beginning of the 16th century, when Sweden left the union through a rebellion and the union was dissolved. After a plague outbreak and fire in the 18th century, the city underwent a period of redevelopment. This included construction of the prestigious district of Frederiksstaden and founding of such cultural institutions as the Royal Theatre and the Royal Academy of Fine Arts.
After further disasters in the early 19th century when Horatio Nelson attacked the Dano-Norwegian fleet and bombarded the city, rebuilding during the Danish Golden Age brought a Neoclassical look to Copenhagen's architecture. Later, following the Second World War, the Finger Plan fostered the development of housing and businesses along the five urban railway routes stretching out from the city centre. Since the turn of the 21st century, Copenhagen has seen strong urban and cultural development, facilitated by investment in its institutions and infrastructure. The city is the cultural, economic and governmental centre of Denmark; it is one of the major financial centres of Northern Europe with the Copenhagen Stock Exchange. Copenhagen's economy has seen rapid developments in the service sector, especially through initiatives in information technology, pharmaceuticals and clean technology. Since the completion of the Øresund Bridge, Copenhagen has become increasingly integrated with the Swedish province of Scania and its largest city, Malmö, forming the Øresund Region. With a number of bridges connecting the various districts, the cityscape is characterised by parks, promenades and waterfronts. Copenhagen's landmarks such as Tivoli Gardens, "The Little Mermaid" statue, the Amalienborg and Christiansborg palaces, Rosenborg Castle Gardens, Frederik's Church, and many museums, restaurants and nightclubs are significant tourist attractions. Copenhagen is home to the University of Copenhagen, the Technical University of Denmark, Copenhagen Business School and the IT University of Copenhagen. The University of Copenhagen, founded in 1479, is the oldest university in Denmark. Copenhagen is home to the FC København and Brøndby football clubs. The annual Copenhagen Marathon was established in 1980. Copenhagen is one of the most bicycle-friendly cities in the world. The Copenhagen Metro, launched in 2002, serves central Copenhagen. 
Additionally the Copenhagen S-train, the Lokaltog () and the Coast Line network serve and connect central Copenhagen to outlying boroughs. Serving roughly two million passengers a month, Copenhagen Airport, Kastrup, is the busiest airport in the Nordic countries. Copenhagen's name reflects its origin as a harbour and a place of commerce. The original designation in Old Norse, from which Danish descends, was Kaupmannahǫfn [ˈkaupmanːahɒvn] (cf. modern Icelandic: "Kaupmannahöfn" [ˈkʰøyhpmanːahœpn], Faroese "Keypmannahavn"), meaning "merchants' harbour". By the time Old Danish was spoken, the capital was called Køpmannæhafn, with the current name deriving from centuries of subsequent regular sound change. An exact English equivalent would be "chapman's haven". However, the English term for the city was adapted from its Low German name, "Kopenhagen". (English "chapman", German "Kaufmann", Dutch "koopman", Swedish "köpman", Danish "købmand", Icelandic "kaupmaður": in all these words, the first syllable comes ultimately from Latin "caupo", "tradesman".) Copenhagen's Swedish name is "Köpenhamn", a direct translation of the mutually intelligible Danish name. Although the earliest historical records of Copenhagen are from the end of the 12th century, recent archaeological finds in connection with work on the city's metropolitan rail system revealed the remains of a large merchant's mansion near today's Kongens Nytorv from c. 1020. Excavations in Pilestræde have also led to the discovery of a well from the late 12th century. The remains of an ancient church, with graves dating to the 11th century, have been unearthed near where Strøget meets Rådhuspladsen. These finds indicate that Copenhagen's origins as a city go back at least to the 11th century. Substantial discoveries of flint tools in the area provide evidence of human settlements dating to the Stone Age. Many historians believe the town dates to the late Viking Age, and was possibly founded by Sweyn I Forkbeard. 
The natural harbour and good herring stocks seem to have attracted fishermen and merchants to the area on a seasonal basis from the 11th century and more permanently in the 13th century. The first habitations were probably centred on Gammel Strand (literally "old shore") in the 11th century or even earlier. The earliest written mention of the town was in the 12th century when Saxo Grammaticus in Gesta Danorum referred to it as "Portus Mercatorum", meaning Merchants' Harbour or, in the Danish of the time, "Købmannahavn". Traditionally, Copenhagen's founding has been dated to Bishop Absalon's construction of a modest fortress on the little island of Slotsholmen in 1167 where Christiansborg Palace stands today. The construction of the fortress was in response to attacks by Wendish pirates who plagued the coastline during the 12th century. Defensive ramparts and moats were completed and by 1177 St. Clemens Church had been built. Attacks by the Germans continued, and after the original fortress was eventually destroyed by the marauders, islanders replaced it with Copenhagen Castle. In 1186, a letter from Pope Urban III states that the castle of "Hafn" (Copenhagen) and its surrounding lands, including the town of Hafn, were given to Absalon, Bishop of Roskilde 1158–1191 and Archbishop of Lund 1177–1201, by King Valdemar I. On Absalon's death, the property was to come into the ownership of the Bishopric of Roskilde. Around 1200, the Church of Our Lady was constructed on higher ground to the northeast of the town, which began to develop around it. As the town became more prominent, it was repeatedly attacked by the Hanseatic League, and in 1368 successfully invaded during the Second Danish-Hanseatic War. As the fishing industry thrived in Copenhagen, particularly in the trade of herring, the city began expanding to the north of Slotsholmen. 
In 1254, it received a charter as a city under Bishop Jakob Erlandsen, who garnered support from the local fishing merchants against the king by granting them special privileges. In the mid-1330s, the first land assessment of the city was published. With the establishment of the Kalmar Union (1397–1523) between Denmark, Norway and Sweden, by about 1416 Copenhagen had emerged as the capital of Denmark when Eric of Pomerania moved his seat to Copenhagen Castle. The University of Copenhagen was inaugurated on 1 June 1479 by King Christian I, following approval from Pope Sixtus IV. This makes it the oldest university in Denmark and one of the oldest in Europe. Originally controlled by the Catholic Church, the university's role in society was forced to change during the Reformation in Denmark in the late 1530s. In disputes prior to the Reformation of 1536, the city, which had been faithful to Christian II, who was Catholic, was successfully besieged in 1523 by the forces of Frederik I, who supported Lutheranism. Copenhagen's defences were reinforced with a series of towers along the city wall. After an extended siege from July 1535 to July 1536, during which the city supported Christian II's alliance with Malmö and Lübeck, it was finally forced to capitulate to Christian III. During the second half of the century, the city prospered from increased trade across the Baltic supported by Dutch shipping. Christoffer Valkendorff, a high-ranking statesman, defended the city's interests and contributed to its development. The Netherlands had also become primarily Protestant, as were northern German states. During the reign of Christian IV between 1588 and 1648, Copenhagen had dramatic growth as a city. On his initiative at the beginning of the 17th century, two important buildings were completed on Slotsholmen: the Tøjhus Arsenal and Børsen, the stock exchange. To foster international trade, the East India Company was founded in 1616.
To the east of the city, inspired by Dutch planning, the king developed the district of Christianshavn with canals and ramparts. It was initially intended to be a fortified trading centre but ultimately became part of Copenhagen. Christian IV also sponsored an array of ambitious building projects including Rosenborg Slot and the Rundetårn. In 1658–59, the city withstood a siege by the Swedes under Charles X and successfully repelled a major assault. By 1661, Copenhagen had asserted its position as capital of Denmark and Norway. All the major institutions were located there, as was the fleet and most of the army. The defences were further enhanced with the completion of the Citadel in 1664 and the extension of Christianshavns Vold with its bastions in 1692, leading to the creation of a new base for the fleet at Nyholm. Copenhagen lost around 22,000 of its population of 65,000 to the plague in 1711. The city was also struck by two major fires which destroyed much of its infrastructure. The Copenhagen Fire of 1728 was the largest in the history of Copenhagen. It began on the evening of 20 October, and continued to burn until the morning of 23 October, destroying approximately 28% of the city, leaving some 20% of the population homeless. No less than 47% of the medieval section of the city was completely lost. Along with the 1795 fire, it is the main reason that few traces of the old town can be found in the modern city. A substantial amount of rebuilding followed. In 1733, work began on the royal residence of Christiansborg Palace which was completed in 1745. In 1749, development of the prestigious district of Frederiksstaden was initiated. Designed by Nicolai Eigtved in the Rococo style, its centre contained the mansions which now form Amalienborg Palace. Major extensions to the naval base of Holmen were undertaken while the city's cultural importance was enhanced with the Royal Theatre and the Royal Academy of Fine Arts. 
In the second half of the 18th century, Copenhagen benefited from Denmark's neutrality during the wars between Europe's main powers, allowing it to play an important role in trade between the states around the Baltic Sea. After Christiansborg was destroyed by fire in 1794 and another fire caused serious damage to the city in 1795, work began on the classical Copenhagen landmark of Højbro Plads, while Nytorv and Gammel Torv were merged. On 2 April 1801, a British fleet under the command of Admiral Sir Hyde Parker attacked and defeated the neutral Danish-Norwegian fleet anchored near Copenhagen. Vice-Admiral Horatio Nelson led the main attack. He famously disobeyed Parker's order to withdraw, destroying many of the Dano-Norwegian ships before a truce was agreed. The Battle of Copenhagen is often considered Nelson's hardest-fought battle, surpassing even the heavy fighting at Trafalgar. It was during this battle that Lord Nelson was said to have "put the telescope to the blind eye" in order not to see Admiral Parker's signal to cease fire. The Second Battle of Copenhagen (or the Bombardment of Copenhagen) (16 August – 5 September 1807) was, from a British point of view, a preemptive attack on Copenhagen, targeting the civilian population to yet again seize the Dano-Norwegian fleet; from a Danish point of view, it was a terror bombardment of their capital. Particularly notable was the use of incendiary Congreve rockets (containing phosphorus, which cannot be extinguished with water) that randomly hit the city. Few houses with straw roofs remained after the bombardment. The largest church, "Vor Frue Kirke", was destroyed by naval artillery. Several historians consider this battle the first terror attack against a major European city in modern times. The British landed 30,000 men, who surrounded Copenhagen, and the attack continued for the next three days, killing some 2,000 civilians and destroying most of the city.
The devastation was so great because Copenhagen relied on an old defence line whose limited range could not reach the British ships and their longer-range artillery. Despite the disasters of the early 19th century, Copenhagen experienced a period of intense cultural creativity known as the Danish Golden Age. Painting prospered under C.W. Eckersberg and his students, while C.F. Hansen and Gottlieb Bindesbøll brought a Neoclassical look to the city's architecture. In the early 1850s, the ramparts of the city were opened to allow new housing to be built around The Lakes that bordered the old defences to the west. By the 1880s, the districts of Nørrebro and Vesterbro developed to accommodate those who came from the provinces to participate in the city's industrialization. This dramatic increase of space was long overdue: not only were the old ramparts out of date as a defence system, but bad sanitation in the old city had to be overcome. From 1886, the west rampart (Vestvolden) was flattened, allowing major extensions to the harbour and leading to the establishment of the Freeport of Copenhagen in 1892–94. Electricity came in 1892, with electric trams in 1897. The spread of housing to areas outside the old ramparts brought about a huge increase in the population. In 1840, Copenhagen was inhabited by approximately 120,000 people. By 1901, it had some 400,000 inhabitants. By the beginning of the 20th century, Copenhagen had become a thriving industrial and administrative city. With its new city hall and railway station, its centre was drawn towards the west. New housing developments grew up in Brønshøj and Valby, while Frederiksberg became an enclave within the city of Copenhagen. The northern part of Amager and Valby were also incorporated into the City of Copenhagen in 1901–02.
As a result of Denmark's neutrality in the First World War, Copenhagen prospered from trade with both Britain and Germany, while the city's defences were kept fully manned by some 40,000 soldiers for the duration of the war. In the 1920s there were serious shortages of goods and housing. Plans were drawn up to demolish the old part of Christianshavn and to get rid of the worst of the city's slum areas. However, it was not until the 1930s that substantial housing developments ensued, with the demolition of one side of Christianshavn's Torvegade to build five large blocks of flats. During World War II, Copenhagen was occupied by German troops along with the rest of the country from 9 April 1940 until 4 May 1945. German leader Adolf Hitler hoped that Denmark would be "a model protectorate", and initially the Nazi authorities sought to arrive at an understanding with the Danish government. The 1943 Danish parliamentary election was also allowed to take place, with only the Communist Party excluded. But in August 1943, after the government's collaboration with the occupation forces collapsed, several ships were sunk in Copenhagen Harbour by the Royal Danish Navy to prevent their use by the Germans. Around that time the Nazis started to arrest Jews, although most managed to escape to Sweden. In 1945 Ole Lippman, leader of the Danish section of the Special Operations Executive, invited the British Royal Air Force to assist their operations by attacking Nazi headquarters in Copenhagen. Accordingly, Air Vice-Marshal Sir Basil Embry drew up plans for a spectacular precision attack on the Sicherheitsdienst and Gestapo building, the former offices of the Shell Oil Company. Political prisoners were kept in the attic to prevent an air raid, so the RAF had to bomb the lower levels of the building. The attack, known as "Operation Carthage", came on 22 March 1945, in three small waves.
In the first wave, all six planes (carrying one bomb each) hit their target, but one of the aircraft crashed near Frederiksberg Girls School. Because of this crash, four of the planes in the two following waves assumed the school was the military target and aimed their bombs at the school, leading to the deaths of 123 civilians (of whom 87 were schoolchildren). However, 18 of the 26 political prisoners in the Shell Building managed to escape, while the Gestapo archives were completely destroyed. On 8 May 1945 Copenhagen was officially liberated by British troops commanded by Field Marshal Bernard Montgomery, who supervised the surrender of 30,000 Germans situated around the capital. Shortly after the end of the war, an innovative urban development project known as the Finger Plan was introduced in 1947, encouraging the creation of new housing and businesses interspersed with large green areas along five "fingers" stretching out from the city centre along the S-train routes. With the expansion of the welfare state and women entering the work force, schools, nurseries, sports facilities and hospitals were established across the city. As a result of student unrest in the late 1960s, the former Bådsmandsstræde Barracks in Christianshavn was occupied, leading to the establishment of Freetown Christiania in September 1971. Motor traffic in the city grew significantly, and in 1972 the trams were replaced by buses. From the 1960s, on the initiative of the young architect Jan Gehl, pedestrian streets and cycle tracks were created in the city centre. Activity in the port of Copenhagen declined with the closure of the Holmen Naval Base. Copenhagen Airport underwent considerable expansion, becoming a hub for the Nordic countries. In the 1990s, large-scale housing developments were realized in the harbour area and in the west of Amager. The national library's Black Diamond building on the waterfront was completed in 1999.
Since the summer of 2000, Copenhagen and the Swedish city of Malmö have been connected by the Øresund Bridge, which carries rail and road traffic. As a result, Copenhagen has become the centre of a larger metropolitan area spanning both nations. The bridge has brought about considerable changes in the public transport system and has led to the extensive redevelopment of Amager. The city's service and trade sectors have developed, while a number of banking and financial institutions have been established. Educational institutions have also gained importance, especially the University of Copenhagen with its 35,000 students. Another important development for the city has been the Copenhagen Metro, a rapid transit railway that opened in 2002, with additions until 2007, transporting some 54 million passengers by 2011. On the cultural front, the Copenhagen Opera House, a gift to the city from the shipping magnate Mærsk Mc-Kinney Møller on behalf of the A.P. Møller foundation, was completed in 2004. In December 2009 Copenhagen gained international prominence when it hosted the worldwide climate meeting COP15. Copenhagen is part of the Øresund Region, which consists of Zealand, Lolland-Falster and Bornholm in Denmark and Scania in Sweden. It is located on the eastern shore of the island of Zealand, partly on the island of Amager and on a number of natural and artificial islets between the two. Copenhagen faces the Øresund to the east, the strait of water that separates Denmark from Sweden and connects the North Sea with the Baltic Sea. The Swedish towns of Malmö and Landskrona lie on the Swedish side of the sound directly across from Copenhagen. By road, Copenhagen is northwest of Malmö, Sweden, northeast of Næstved, northeast of Odense, east of Esbjerg and southeast of Aarhus by sea and road via Sjællands Odde.
The city centre lies in the area originally defined by the old ramparts, which are still referred to as the Fortification Ring ("Fæstningsringen") and kept as a partial green band around it. Then come the late-19th- and early-20th-century residential neighbourhoods of Østerbro, Nørrebro, Vesterbro and Amagerbro. The outlying areas of Kongens Enghave, Valby, Vigerslev, Vanløse, Brønshøj, Utterslev and Sundby followed from 1920 to 1960. They consist mainly of residential housing and apartments, often enhanced with parks and greenery. The central area of the city consists of relatively low-lying flat ground formed by moraines from the last ice age, while the hilly areas to the north and west frequently rise to above sea level. The slopes of Valby and Brønshøj reach heights of over , divided by valleys running from the northeast to the southwest. Close to the centre are the Copenhagen lakes of Sortedams Sø, Peblinge Sø and Sankt Jørgens Sø. Copenhagen rests on a subsoil of flint-layered limestone deposited in the Danian period some 60 to 66 million years ago. Some greensand from the Selandian is also present. There are a few faults in the area, the most important of which is the Carlsberg fault, which runs northwest to southeast through the centre of the city. During the last ice age, glaciers eroded the surface, leaving a layer of moraines up to thick. Geologically, Copenhagen lies in the northern part of Denmark, where the land is rising because of post-glacial rebound. Amager Strandpark, which opened in 2005, is a long artificial island, with a total of of beaches. It is located just 15 minutes by bicycle or a few minutes by metro from the city centre. In Klampenborg, about 10 kilometres from downtown Copenhagen, is Bellevue Beach. It is long and has both lifeguards and freshwater showers on the beach. The beaches are supplemented by a system of Harbour Baths along the Copenhagen waterfront.
The first and most popular of these is located at Islands Brygge and has won international acclaim for its design. Copenhagen is in the oceanic climate zone (Köppen: "Cfb"). Its weather is subject to low-pressure systems from the Atlantic which result in unstable conditions throughout the year. Apart from slightly higher rainfall from July to September, precipitation is moderate. While snowfall occurs mainly from late December to early March, there can also be rain, with average temperatures around the freezing point. June is the sunniest month of the year with an average of about eight hours of sunshine a day. July is the warmest month with an average daytime high of 21 °C. By contrast, the average hours of sunshine are less than two per day in November and only one and a half per day from December to February. In the spring, it gets warmer again with four to six hours of sunshine per day from March to May. February is the driest month of the year. Exceptional weather conditions can bring as much as 50 cm of snow to Copenhagen in a 24-hour period during the winter months while summer temperatures have been known to rise to heights of . Because of Copenhagen's northern latitude, the number of daylight hours varies considerably between summer and winter. On the summer solstice, the sun rises at 04:26 and sets at 21:58, providing 17 hours 32 minutes of daylight. On the winter solstice, it rises at 08:37 and sets at 15:39, giving 7 hours and 2 minutes of daylight. There is therefore a difference of 10 hours and 30 minutes in the length of days and nights between the summer and winter solstices. According to Statistics Denmark, the urban area of Copenhagen consists of the municipalities of Copenhagen, Frederiksberg, Albertslund, Brøndby, Gentofte, Gladsaxe, Glostrup, Herlev, Hvidovre, Lyngby-Taarbæk, Rødovre, Tårnby and Vallensbæk as well as parts of Ballerup, Rudersdal and Furesø municipalities, along with the cities of Ishøj and Greve Strand.
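The daylight figures quoted for the solstices follow directly from the sunrise and sunset times; a minimal Python sketch of the arithmetic (the `daylight` helper is hypothetical, and the times are those given above):

```python
# Compute daylight length from same-day sunrise/sunset times (HH:MM).
from datetime import datetime, timedelta

def daylight(sunrise: str, sunset: str) -> timedelta:
    """Return the interval between two same-day HH:MM clock times."""
    fmt = "%H:%M"
    return datetime.strptime(sunset, fmt) - datetime.strptime(sunrise, fmt)

summer = daylight("04:26", "21:58")  # summer solstice in Copenhagen
winter = daylight("08:37", "15:39")  # winter solstice in Copenhagen

print(summer)           # 17:32:00
print(winter)           # 7:02:00
print(summer - winter)  # 10:30:00
```

Note that the quoted times work out to 7 hours 2 minutes of winter daylight and a summer–winter difference of 10 hours 30 minutes.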
They are located in the Capital Region. Municipalities are responsible for a wide variety of public services, which include land-use planning, environmental planning, public housing, management and maintenance of local roads, and social security. Municipal administration is conducted by a mayor, a council, and an executive. Copenhagen Municipality is by far the largest municipality, with the historic city at its core. The seat of Copenhagen's municipal council is the Copenhagen City Hall, which is situated on City Hall Square. The second largest municipality is Frederiksberg, an enclave within Copenhagen Municipality. Copenhagen Municipality is divided into ten districts ("bydele"): Indre By, Østerbro, Nørrebro, Vesterbro/Kongens Enghave, Valby, Vanløse, Brønshøj-Husum, Bispebjerg, Amager Øst, and Amager Vest. Neighbourhoods of Copenhagen include Slotsholmen, Frederiksstaden, Islands Brygge, Holmen, Christiania, Carlsberg, Sluseholmen, Sydhavn, Amagerbro, Ørestad, Nordhavnen, Bellahøj, Brønshøj, Ryparken, and Vigerslev. Most of Denmark's top legal courts and institutions are based in Copenhagen. A modern-style court of justice, "Hof- og Stadsretten", was introduced in Denmark, specifically for Copenhagen, by Johann Friedrich Struensee in 1771. Now known as the City Court of Copenhagen ("Københavns Byret"), it is the largest of the 24 city courts in Denmark, with jurisdiction over the municipalities of Copenhagen, Dragør and Tårnby. With its 42 judges, it has a Probate Division, an Enforcement Division and a Registration and Notarial Acts Division, while bankruptcy is handled by the Maritime and Commercial Court of Copenhagen. Established in 1862, the Maritime and Commercial Court ("Sø- og Handelsretten") also hears commercial cases including those relating to trade marks, marketing practices and competition for the whole of Denmark.
Denmark's Supreme Court ("Højesteret"), located in Christiansborg Palace on Prins Jørgens Gård in the centre of Copenhagen, is the country's final court of appeal. Handling civil and criminal cases from the subordinate courts, it has two chambers which each hear all types of cases. The headquarters of the Danish National Police and the Copenhagen Police is situated in the Neoclassical-inspired Politigården building, built in 1918–24 under architects Hack Kampmann and Holger Alfred Jacobsen. The building also contains administration, management, emergency department and radio service offices. In their efforts to deal with drugs, the police have noted considerable success with the two special drug consumption rooms opened by the city, where addicts can use sterile needles and receive help from nurses if necessary. Use of these rooms does not lead to prosecution; the city treats drug use as a public health issue, not a criminal one. The Copenhagen Fire Department forms the largest municipal fire brigade in Denmark, with some 500 fire and ambulance personnel, 150 administration and service workers, and 35 workers in prevention. The brigade began as the Copenhagen Royal Fire Brigade on 9 July 1687 under King Christian V. After the passing of the Copenhagen Fire Act on 18 May 1868, the Copenhagen Fire Brigade became a municipal institution in its own right on 1 August 1870. The fire department has its headquarters in the Copenhagen Central Fire Station, which was designed by Ludvig Fenger in the Historicist style and inaugurated in 1892. Copenhagen is recognized as one of the most environmentally friendly cities in the world. As a result of its commitment to high environmental standards, Copenhagen has been praised for its green economy, ranked as the top green city for the second time in the 2014 "Global Green Economy Index (GGEI)". In 2001 a large offshore wind farm was built just off the coast of Copenhagen at Middelgrunden. It produces about 4% of the city's energy.
Years of substantial investment in sewage treatment have improved water quality in the harbour to an extent that the inner harbour can be used for swimming, with facilities at a number of locations. Copenhagen aims to be carbon-neutral by 2025. Commercial and residential buildings are to reduce electricity consumption by 20 percent and 10 percent respectively, and total heat consumption is to fall by 20 percent by 2025. Renewable energy features such as solar panels are becoming increasingly common in the newest buildings in Copenhagen. District heating will be carbon-neutral by 2025, through waste incineration and biomass. New buildings must now be constructed according to Low Energy Class ratings and, from 2020, as near net-zero energy buildings. By 2025, 75% of trips should be made on foot, by bike, or by public transit. The city plans that 20–30% of cars will run on electricity or biofuel by 2025. The investment is estimated at $472 million in public funds and $4.78 billion in private funds. The city's urban planning authorities continue to take full account of these priorities. Special attention is given both to climate issues and to efforts to ensure maximum application of low-energy standards. Priorities include sustainable drainage systems, recycling rainwater, green roofs and efficient waste management solutions. In city planning, streets and squares are to be designed to encourage cycling and walking rather than driving. Further, the city administration is working with smart city initiatives to improve how data and technology can be used to implement new solutions that support the transition toward a carbon-neutral economy. These solutions support operations covered by the city administration to improve, for example, public health, district heating, urban mobility and waste management systems. Smart city operations in Copenhagen are maintained by Copenhagen Solutions Lab, the city's official smart-city development unit under the Technical and Environmental Administration.
Copenhagen is the most populous city in Denmark and one of the most populous in the Nordic countries. For statistical purposes, Statistics Denmark considers the City of Copenhagen ("Byen København") to consist of the Municipality of Copenhagen plus three adjacent municipalities: Dragør, Frederiksberg, and Tårnby. Their combined population stands at 763,908. The Municipality of Copenhagen is by far the most populous in the country and one of the most populous Nordic municipalities, with 601,448 inhabitants. There was a demographic boom in the 1990s and the first decade of the 21st century, largely due to immigration to Denmark. According to figures from the first quarter of 2016, approximately 76% of the municipality's population was of Danish descent, defined as having at least one parent who was born in Denmark and has Danish citizenship. Much of the remaining 24% were of a foreign background, defined as immigrants (18%) or descendants of recent immigrants (6%). There are no official statistics on ethnic groups. The adjacent table shows the most common countries of birth of Copenhagen residents. According to Statistics Denmark, Copenhagen's wider urban area has a population of 1,280,371. The urban area consists of the municipalities of Copenhagen and Frederiksberg plus 16 of the 20 municipalities of the former counties of Copenhagen and Roskilde, though five of them only partially. Metropolitan Copenhagen has a total of 2,016,285 inhabitants. The area of Metropolitan Copenhagen is defined by the Finger Plan. Since the opening of the Øresund Bridge in 2000, commuting between Zealand and Scania in Sweden has increased rapidly, leading to a wider, integrated area. Known as the Øresund Region, it has 3.8 million inhabitants (of whom 2.5 million live in the Danish part of the region). A majority (56.9%) of those living in Copenhagen are members of the Lutheran Church of Denmark, a share 0.6 percentage points lower than one year earlier, according to 2019 figures.
The National Cathedral, the Church of Our Lady, is one of the dozens of churches in Copenhagen. There are also several other Christian communities in the city, of which the largest is Roman Catholic. Foreign migration to Copenhagen, rising over the last three decades, has contributed to increasing religious diversity; the Grand Mosque of Copenhagen, the first in Denmark, opened in 2014. Islam is the second largest religion in Copenhagen, accounting for approximately 10% of the population. While there are no official statistics, a significant portion of the estimated 175,000–200,000 Muslims in the country live in the Copenhagen urban area, with the highest concentration in Nørrebro and the Vestegnen. There are also some 7,000 Jews in Denmark, most of them in the Copenhagen area, where there are several synagogues. There is a long history of Jews in the city, and the first synagogue in Copenhagen was built in 1684. Today, the history of the Jews of Denmark can be explored at the Danish Jewish Museum in Copenhagen. For a number of years, Copenhagen has ranked high in international surveys for its quality of life. Its stable economy, together with its education services and level of social safety, makes it attractive for locals and visitors alike. Although it is one of the world's most expensive cities, it is also one of the most liveable with its public transport, facilities for cyclists and its environmental policies. In elevating Copenhagen to "most liveable city" in 2013, "Monocle" pointed to its open spaces, increasing activity on the streets, city planning in favour of cyclists and pedestrians, and features to encourage inhabitants to enjoy city life with an emphasis on community, culture and cuisine. Other sources have ranked Copenhagen high for its business environment, accessibility, restaurants and environmental planning. However, Copenhagen ranked only 39th for student friendliness in 2012.
Despite a top score for quality of living, its scores were low for employer activity and affordability. Copenhagen is the major economic and financial centre of Denmark. The city's economy is based largely on services and commerce. Statistics for 2010 show that the vast majority of the 350,000 workers in Copenhagen are employed in the service sector, especially transport and communications, trade, and finance, while fewer than 10,000 work in the manufacturing industries. The public sector workforce is around 110,000, including education and healthcare. From 2006 to 2011, the economy grew by 2.5% in Copenhagen, while it fell by some 4% in the rest of Denmark. In 2017, the wider Capital Region of Denmark had a gross domestic product (GDP) of €120 billion and the 15th-largest GDP per capita among regions in the European Union. Several financial institutions and banks have headquarters in Copenhagen, including Alm. Brand, Danske Bank, Nykredit and Nordea Bank Danmark. The Copenhagen Stock Exchange (CSE) was founded in 1620 and is now owned by Nasdaq, Inc. Copenhagen is also home to a number of international companies including A.P. Møller-Mærsk, Novo Nordisk, Carlsberg and Novozymes. City authorities have encouraged the development of business clusters in several innovative sectors, which include information technology, biotechnology, pharmaceuticals, clean technology and smart city solutions. Life science is a key sector with extensive research and development activities. Medicon Valley is a leading bi-national life sciences cluster in Europe, spanning the Øresund Region. Copenhagen is rich in companies and institutions with a focus on research and development within the field of biotechnology, and the Medicon Valley initiative aims to strengthen this position and to promote cooperation between companies and academia.
Many major Danish companies like Novo Nordisk and Lundbeck, both of which are among the 50 largest pharmaceutical and biotech companies in the world, are located in this business cluster. Shipping is another important sector, with Maersk, the world's largest shipping company, having its world headquarters in Copenhagen. The city has an industrial harbour, Copenhagen Port. Following decades of stagnation, it has experienced a resurgence since 1990, following a merger with Malmö harbour. Both ports are operated by Copenhagen Malmö Port (CMP). The central location in the Øresund Region allows the ports to act as a hub for freight that is transported onward to the Baltic countries. CMP annually receives about 8,000 ships and handled some 148,000 TEU in 2012. Copenhagen has some of the highest gross wages in the world. High taxes mean that wages are reduced considerably after mandatory deductions. A "beneficial researcher scheme" with low taxation of foreign specialists has made Denmark an attractive location for foreign labour. It is, however, also among the most expensive cities in Europe. Denmark's Flexicurity model features some of the most flexible hiring and firing legislation in Europe, providing attractive conditions for foreign investment and international companies looking to locate in Copenhagen. In Dansk Industri's 2013 survey of employment factors in the ninety-six municipalities of Denmark, Copenhagen came in first place for educational qualifications and for the development of private companies in recent years, but fell to 86th place in local companies' assessment of the employment climate. The survey revealed considerable dissatisfaction with the level of dialogue companies enjoyed with the municipal authorities. Tourism is a major contributor to Copenhagen's economy, attracting visitors due to the city's harbour, cultural attractions and award-winning restaurants. Since 2009, Copenhagen has been one of the fastest growing metropolitan destinations in Europe.
Hotel capacity in the city is growing significantly. From 2009 to 2013, it experienced a 42% growth in international bed nights (total number of nights spent by tourists), tallying a rise of nearly 70% for Chinese visitors. The total number of bed nights in the Capital Region surpassed 9 million in 2013, while international bed nights reached 5 million. In 2010, it was estimated that city break tourism contributed DKK 2 billion in turnover. However, 2010 was an exceptional year for city break tourism, and turnover increased by 29% in that one year. Some 680,000 cruise passengers visited the port in 2015. In 2019 Copenhagen was ranked first among Lonely Planet's top ten cities to visit. The city's appearance today is shaped by the key role it has played as a regional centre for centuries. Copenhagen has a multitude of districts, each with its distinctive character and representing its own period. Other distinctive features of Copenhagen include the abundance of water, its many parks, and the bicycle paths that line most streets. The oldest section of Copenhagen's inner city is often referred to as "Middelalderbyen" (the medieval city). However, the city's most distinctive district is Frederiksstaden, developed during the reign of Frederick V. It has the Amalienborg Palace at its centre and is dominated by the dome of Frederik's Church (or the Marble Church) and several elegant 18th-century Rococo mansions. The inner city includes Slotsholmen, a little island on which Christiansborg Palace stands, and Christianshavn with its canals. Børsen on Slotsholmen and Frederiksborg Palace in Hillerød are prominent examples of the Dutch Renaissance style in Copenhagen. Around the historical city centre lies a band of congenial residential boroughs (Vesterbro, Inner Nørrebro, Inner Østerbro) dating mainly from the late 19th century. They were built outside the old ramparts when the city was finally allowed to expand beyond its fortifications.
Sometimes referred to as "the City of Spires", Copenhagen is known for its horizontal skyline, broken only by the spires and towers of its churches and castles. Most characteristic of all is the Baroque spire of the Church of Our Saviour with its narrowing external spiral stairway that visitors can climb to the top. Other important spires are those of Christiansborg Palace, the City Hall and the former Church of St. Nikolaj that now houses a modern art venue. Not quite so high are the Renaissance spires of Rosenborg Castle and the "dragon spire" of Christian IV's former stock exchange, so named because it resembles the intertwined tails of four dragons. Copenhagen is recognised globally as an exemplar of best-practice urban planning. Its thriving mixed-use city centre is defined by striking contemporary architecture, engaging public spaces and an abundance of human activity. These design outcomes have been deliberately achieved through careful replanning in the second half of the 20th century. Recent years have seen a boom in modern architecture in Copenhagen, both in Danish architecture and in works by international architects. For a few hundred years, virtually no foreign architects had worked in Copenhagen, but since the turn of the millennium the city and its immediate surroundings have seen buildings and projects designed by top international architects. British design magazine "Monocle" named Copenhagen the "World's best design city 2008". Copenhagen's urban development in the first half of the 20th century was heavily influenced by industrialisation. After World War II, Copenhagen Municipality adopted Fordism and repurposed its medieval centre to facilitate private automobile infrastructure in response to innovations in transport, trade and communication. Copenhagen's spatial planning in this time frame was characterised by the separation of land uses: an approach which requires residents to travel by car to access facilities of different uses.
The boom in urban development and modern architecture has brought some changes to the city's skyline. A political majority has decided to keep the historical centre free of high-rise buildings, but several areas will see or have already seen massive urban development. Ørestad has seen most of the recent development. Located near Copenhagen Airport, it currently boasts one of the largest malls in Scandinavia and a variety of office and residential buildings as well as the IT University and a high school. Copenhagen is a green city with many parks, both large and small. King's Garden (""), the garden of Rosenborg Castle, is the oldest and most frequented of them all. It was Christian IV who first developed its landscaping in 1606. Every year it sees more than 2.5 million visitors, and in the summer months it is packed with sunbathers, picnickers and ballplayers. It serves as a sculpture garden with both a permanent display and temporary exhibits during the summer months. Also located in the city centre are the Botanical Gardens, noted for their large complex of 19th-century greenhouses donated by Carlsberg founder J. C. Jacobsen. Fælledparken is the largest park in Copenhagen. It is popular for sports fixtures and hosts several annual events including a free opera concert at the opening of the opera season, other open-air concerts, carnival and Labour Day celebrations, and the Copenhagen Historic Grand Prix, a race for antique cars. A historical green space in the northeastern part of the city is Kastellet, a well-preserved Renaissance citadel that now serves mainly as a park. Another popular park is the Frederiksberg Gardens, a 32-hectare romantic landscape park. It houses a colony of tame grey herons and other waterfowl. The park offers views of the elephants and the elephant house designed by world-famous British architect Norman Foster of the adjacent Copenhagen Zoo.
Langelinie, a park and promenade along the inner Øresund coast, is home to one of Copenhagen's most-visited tourist attractions, the Little Mermaid statue. In Copenhagen, many cemeteries double as parks, though only for the quieter activities such as sunbathing, reading and meditation. Assistens Cemetery, the burial place of Hans Christian Andersen, is an important green space for the district of Inner Nørrebro and a Copenhagen institution. The lesser-known Vestre Kirkegaard is the largest cemetery in Denmark and offers a maze of dense groves, open lawns, winding paths, hedges, overgrown tombs, monuments, tree-lined avenues, lakes and other garden features. It is official municipal policy in Copenhagen that by 2015 all citizens must be able to reach a park or beach on foot in less than 15 minutes. In line with this policy, several new parks, including the innovative Superkilen in the Nørrebro district, have been completed or are under development in areas lacking green spaces. The historic centre of the city, Indre By or the Inner City, features many of Copenhagen's most popular monuments and attractions. The area known as Frederiksstaden, developed by Frederik V in the second half of the 18th century in the Rococo style, has the four mansions of Amalienborg, the royal residence, and the wide-domed Marble Church at its centre. Directly across the water from Amalienborg, the recently completed Copenhagen Opera House stands on the island of Holmen. To the south of Frederiksstaden, the Nyhavn canal is lined with colourful houses from the 17th and 18th centuries, many now with lively restaurants and bars. The canal runs from the harbour front to the spacious square of Kongens Nytorv, which was laid out by Christian V in 1670. Important buildings include Charlottenborg Palace, famous for its art exhibitions, the Thott Palace (now the French embassy), the Royal Danish Theatre and the Hotel D'Angleterre, dated to 1755.
Other landmarks in Indre By include the parliament building of Christiansborg, the City Hall and Rundetårn, originally an observatory. There are also several museums in the area, including the Thorvaldsen Museum, dedicated to the Danish sculptor Bertel Thorvaldsen. Closed to traffic since 1964, Strøget, the world's oldest and longest pedestrian street, runs from Rådhuspladsen to Kongens Nytorv. With its speciality shops, cafés, restaurants, and buskers, it is always full of life and includes the old squares of Gammel Torv and Amagertorv, each with a fountain. Rosenborg Castle on Øster Voldgade was built by Christian IV in 1606 as a summer residence in the Renaissance style. It houses the Danish crown jewels and crown regalia, the coronation throne and tapestries illustrating Christian V's victories in the Scanian War. Christianshavn lies to the southeast of Indre By on the other side of the harbour. The area was developed by Christian IV in the early 17th century. Impressed by the city of Amsterdam, he employed Dutch architects to create canals within its ramparts, which are still well preserved today. The canals themselves, branching off the central Christianshavn Canal and lined with house boats and pleasure craft, are one of the area's attractions. Another interesting feature is Freetown Christiania, a fairly large area which was initially occupied by squatters during student unrest in 1971. Today it still maintains a measure of autonomy. The inhabitants openly sell drugs on "Pusher Street" as well as their arts and crafts. Other buildings of interest in Christianshavn include the Church of Our Saviour with its spiralling steeple and the magnificent Rococo Christian's Church. Once a warehouse, the North Atlantic House now displays culture from Iceland and Greenland and houses the Noma restaurant, known for its Nordic cuisine.
Vesterbro, to the southwest of Indre By, begins with the Tivoli Gardens, the city's top tourist attraction with its fairground atmosphere, its Pantomime Theatre, its Concert Hall and its many rides and restaurants. The Carlsberg neighbourhood has some interesting vestiges of the old brewery of the same name, including the Elephant Gate and the Ny Carlsberg Brewhouse. The Tycho Brahe Planetarium is located on the edge of Skt. Jørgens Sø, one of the Copenhagen lakes. Halmtorvet, the old haymarket behind the Central Station, is an increasingly popular area with its cafés and restaurants. The former cattle market Øksnehallen has been converted into a modern exhibition centre for art and photography. The Radisson Blu Royal Hotel, built by the Danish architect and designer Arne Jacobsen for the airline Scandinavian Airlines System (SAS) between 1956 and 1960, was once the tallest hotel in Denmark and remained the city's only skyscraper until 1969. Completed in 1908, Det Ny Teater (the New Theatre), located in a passage between Vesterbrogade and Gammel Kongevej, has become a popular venue for musicals since its reopening in 1994, attracting the largest audiences in the country. Nørrebro, to the northwest of the city centre, has recently developed from a working-class district into a colourful cosmopolitan area with antique shops, non-Danish food stores and restaurants. Much of the activity is centred on Sankt Hans Torv and around Rantzausgade. Copenhagen's historic cemetery, Assistens Kirkegård, halfway up Nørrebrogade, is the resting place of many famous figures including Søren Kierkegaard, Niels Bohr, and Hans Christian Andersen, but is also used by locals as a park and recreation area. Just north of the city centre, Østerbro is an upper middle-class district with a number of fine mansions, some now serving as embassies. The district stretches from Nørrebro to the waterfront, where "The Little Mermaid" statue can be seen from the promenade known as Langelinie.
Inspired by Hans Christian Andersen's fairy tale, it was created by Edvard Eriksen and unveiled in 1913. Not far from the Little Mermaid, the old Citadel ("Kastellet") can be seen. Built by Christian IV, it is one of northern Europe's best preserved fortifications. There is also a windmill in the area. The large Gefion Fountain ("Gefionspringvandet"), designed by Anders Bundgaard and completed in 1908, stands close to the southeast corner of Kastellet. Its figures illustrate a Nordic legend. Frederiksberg, a separate municipality within the urban area of Copenhagen, lies to the west of Nørrebro and Indre By and north of Vesterbro. Its landmarks include Copenhagen Zoo, founded in 1869 and home to over 250 species from all over the world, and Frederiksberg Palace, built as a summer residence by Frederick IV, who was inspired by Italian architecture. Now a military academy, it overlooks the extensive landscaped Frederiksberg Gardens with its follies, waterfalls, lakes and decorative buildings. The wide tree-lined avenue of Frederiksberg Allé, connecting Vesterbrogade with the Frederiksberg Gardens, has long been associated with theatres and entertainment. While a number of the earlier theatres are now closed, the Betty Nansen Theatre and Aveny-T are still active. Amagerbro (also known as Sønderbro) is the district located immediately south-east of Christianshavn at northernmost Amager. The old city moats and their surrounding parks constitute a clear border between these districts. The main street is Amagerbrogade, which, after the harbour bridge Langebro, is an extension of H. C. Andersens Boulevard and has a variety of stores and shops as well as restaurants and pubs. Amagerbro was built up during the first two decades of the twentieth century and is the city's northernmost block-built area, with buildings of typically 4–7 floors. Further south follow the Sundbyøster and Sundbyvester districts. Hellerup is the city's northernmost district with a central city feeling.
It lies north of Østerbro and is widely regarded as perhaps the most fashionable part of Copenhagen. Various shops, stores and restaurants line Strandvejen (a street that, further north, becomes a road leading to Elsinore), and its side streets contain many large villas. Hellerup is not part of Copenhagen municipality, but constitutes the eastern part of Gentofte. It is nevertheless a typical city environment, with no notable boundary towards the south. Not far from Copenhagen Airport on the Kastrup coast, The Blue Planet, completed in March 2013, now houses the national aquarium. With its 53 aquariums, it is the largest facility of its kind in Scandinavia. Grundtvig's Church, located in the northern suburb of Bispebjerg, was designed by P.V. Jensen Klint and completed in 1940. A rare example of Expressionist church architecture, its striking west façade is reminiscent of a church organ. Apart from being the national capital, Copenhagen also serves as the cultural hub of Denmark and wider Scandinavia. Since the late 1990s, it has undergone a transformation from a modest Scandinavian capital into a metropolitan city of international appeal in the same league as Barcelona and Amsterdam. This is a result of huge investments in infrastructure and culture as well as the work of successful new Danish architects, designers and chefs. Copenhagen Fashion Week, the largest fashion event in Northern Europe, takes place every year in February and August. Copenhagen has a wide array of museums of international standing. The National Museum, "Nationalmuseet", is Denmark's largest museum of archaeology and cultural history, comprising the histories of Danish and foreign cultures alike. Denmark's National Gallery ("Statens Museum for Kunst") is the national art museum with collections dating from the 12th century to the present.
In addition to Danish painters, artists represented in the collections include Rubens, Rembrandt, Picasso, Braque, Léger, Matisse, Emil Nolde, Olafur Eliasson, Elmgreen and Dragset, Superflex and Jens Haaning. Another important Copenhagen art museum is the Ny Carlsberg Glyptotek, founded by second-generation Carlsberg philanthropist Carl Jacobsen and built around his personal collections. Its main focus is classical Egyptian, Roman and Greek sculptures and antiquities, together with a collection of Rodin sculptures, the largest outside France. Besides its sculpture collections, the museum also holds a comprehensive collection of paintings by Impressionist and Post-Impressionist painters such as Monet, Renoir, Cézanne, van Gogh and Toulouse-Lautrec, as well as works by the Danish Golden Age painters. The Louisiana Museum of Modern Art is situated on the coast just north of Copenhagen, in the middle of a sculpture garden on a cliff overlooking Øresund. Its collection of over 3,000 items includes works by Picasso, Giacometti and Dubuffet. The Danish Design Museum is housed in the 18th-century former Frederiks Hospital and displays Danish design as well as international design and crafts. Other museums include: the Thorvaldsens Museum, dedicated to the oeuvre of romantic Danish sculptor Bertel Thorvaldsen, who lived and worked in Rome; the Cisternerne museum, an exhibition space for contemporary art, located in former cisterns that come complete with stalactites formed by the changing water levels; and the Ordrupgaard Museum, located just north of Copenhagen, which features 19th-century French and Danish art and is noted for its works by Paul Gauguin. The new Copenhagen Concert Hall opened in January 2009. Designed by Jean Nouvel, it has four halls, with the main auditorium seating 1,800 people. It serves as the home of the Danish National Symphony Orchestra and is, along with the Walt Disney Concert Hall in Los Angeles, among the most expensive concert halls ever built.
Another important venue for classical music is the Tivoli Concert Hall located in the Tivoli Gardens. Designed by Henning Larsen, the Copenhagen Opera House ("Operaen") opened in 2005. It is among the most modern opera houses in the world. The Royal Danish Theatre also stages opera in addition to its drama productions. It is also home to the Royal Danish Ballet. Founded in 1748 along with the theatre, it is one of the oldest ballet troupes in Europe, and is noted for its Bournonville style of ballet. Copenhagen has a significant jazz scene that has existed for many years. It developed when a number of American jazz musicians such as Ben Webster, Thad Jones, Richard Boone, Ernie Wilkins, Kenny Drew, Ed Thigpen, Bob Rockwell, Dexter Gordon, and others such as rock guitarist Link Wray came to live in Copenhagen during the 1960s. Every year in early July, Copenhagen's streets, squares, parks as well as cafés and concert halls fill up with big and small jazz concerts during the Copenhagen Jazz Festival. One of Europe's top jazz festivals, the annual event features around 900 concerts at 100 venues with over 200,000 guests from Denmark and around the world. The largest venue for popular music in Copenhagen is Vega in the Vesterbro district. It was chosen as "best concert venue in Europe" by the international music magazine "Live". The venue has three concert halls: the great hall, Store Vega, accommodates audiences of 1,550; the middle hall, Lille Vega, has space for 500; and Ideal Bar Live has a capacity of 250. Every September since 2006, the Festival of Endless Gratitude (FOEG) has taken place in Copenhagen. This festival focuses on indie counterculture, experimental pop music and left-field music combined with visual arts exhibitions. Copenhagen is home to the "K-Town" punk and hardcore music community.
This community developed around the underground venue Ungdomshuset in the late-1990s punk scene, with punk and hardcore acts such as Snipers, Amdi Petersens Armé, Gorilla Angreb, Young Wasteners, and No Hope for the Kids emerging as significant bands. The term "K-Town" gained international recognition within the punk scene with the emergence of the "K-Town" festivals. In 2001, the first of these was held in Ungdomshuset, on Jagtvej 69, Nørrebro, Copenhagen. After Ungdomshuset was evicted from its original location, the festival temporarily moved to Freetown Christiania until a new Ungdomshuset opened on Dortheavej 61. For free entertainment one can stroll along Strøget, especially between Nytorv and Højbro Plads, which in the late afternoon and evening is a bit like an impromptu three-ring circus with musicians, magicians, jugglers and other street performers. Most of Denmark's major publishing houses are based in Copenhagen. These include the book publishers Gyldendal and Akademisk Forlag and the newspaper publishers Berlingske and Politiken (the latter also publishing books). Many of the most important contributors to Danish literature, such as Hans Christian Andersen (1805–1875) with his fairy tales, the philosopher Søren Kierkegaard (1813–1855) and the playwright Ludvig Holberg (1684–1754), spent much of their lives in Copenhagen. Novels set in Copenhagen include "Baby" (1973) by Kirsten Thorup, "The Copenhagen Connection" (1982) by Barbara Mertz, "Number the Stars" (1989) by Lois Lowry, "Miss Smilla's Feeling for Snow" (1992) and "Borderliners" (1993) by Peter Høeg, "Music and Silence" (1999) by Rose Tremain, "The Danish Girl" (2000) by David Ebershoff, and "Sharpe's Prey" (2001) by Bernard Cornwell. Michael Frayn's 1998 play "Copenhagen", about the meeting between the physicists Niels Bohr and Werner Heisenberg in 1941, is also set in the city.
On 15–18 August 1973, an oral literature conference took place in Copenhagen as part of the 9th International Congress of Anthropological and Ethnological Sciences. The Royal Library, belonging to the University of Copenhagen, is the largest library in the Nordic countries, with an almost complete collection of all printed Danish books since 1482. Founded in 1648, the Royal Library is located at four sites in the city, the main one being on the Slotsholmen waterfront. Copenhagen's public library network has over 20 outlets, the largest being the Central Library ("Københavns Hovedbibliotek") on Krystalgade in the inner city. Copenhagen has a wide selection of art museums and galleries displaying both historic works and more modern contributions. They include Statens Museum for Kunst, i.e. the Danish national art gallery, in the Østre Anlæg park, and the adjacent Hirschsprung Collection, specialising in the 19th and early 20th century. Kunsthal Charlottenborg in the city centre exhibits national and international contemporary art. Den Frie Udstilling near the Østerport Station exhibits paintings created and selected by contemporary artists themselves rather than by the official authorities. The Arken Museum of Modern Art is located in Ishøj, southwest of Copenhagen. Among the artists who have painted scenes of Copenhagen are Martinus Rørbye (1803–1848), Christen Købke (1810–1848) and the prolific Paul Gustav Fischer (1860–1934). A number of notable sculptures can be seen in the city. In addition to "The Little Mermaid" on the waterfront, there are two historic equestrian statues in the city centre: Jacques Saly's "Frederik V on Horseback" (1771) in Amalienborg Square and the statue of Christian V on Kongens Nytorv, created in 1688 by Abraham-César Lamoureux, who was inspired by the statue of Louis XIII in Paris.
Rosenborg Castle Gardens contains several sculptures and monuments including August Saabye's Hans Christian Andersen, Aksel Hansen's Echo, and Vilhelm Bissen's Dowager Queen Caroline Amalie. Copenhagen is believed to have invented the photomarathon photography competition, which has been held in the city each year since 1989. Copenhagen has 15 Michelin-starred restaurants, the most of any Scandinavian city, and is increasingly recognized internationally as a gourmet destination. These include Den Røde Cottage, Formel B Restaurant, Grønbech & Churchill, Søllerød Kro, Kadeau, Kiin Kiin (Denmark's first Michelin-starred Asian gourmet restaurant), the French restaurant Kong Hans Kælder, Relæ, Restaurant AOC, Noma (short for Danish: "no"rdisk "ma"d, English: Nordic food) with two stars and Geranium with three. Noma was ranked as the Best Restaurant in the World by "Restaurant" in 2010, 2011, 2012, and again in 2014, sparking interest in the New Nordic Cuisine. Apart from the selection of upmarket restaurants, Copenhagen offers a great variety of Danish, ethnic and experimental restaurants. It is possible to find modest eateries serving open sandwiches, known as smørrebrød – a traditional Danish lunch dish; however, most restaurants serve international dishes. Danish pastry can be sampled from any of the numerous bakeries found in all parts of the city. The Copenhagen Baker's Association dates back to the 1290s, and Denmark's oldest confectioner's shop still operating, "Conditori La Glace", was founded in 1870 in Skoubogade by Nicolaus Henningsen, a trained master baker from Flensburg. Copenhagen has long been associated with beer. Carlsberg beer has been brewed at the brewery's premises on the border between the Vesterbro and Valby districts since 1847 and has long been almost synonymous with Danish beer production.
However, recent years have seen an explosive growth in the number of microbreweries, so that Denmark today has more than 100 breweries, many of which are located in Copenhagen. Some, like "Nørrebro Bryghus", also act as brewpubs where it is possible to eat on the premises. Copenhagen has one of the highest numbers of restaurants and bars per capita in the world. The nightclubs and bars stay open until 5 or 6 in the morning, some even longer. Denmark has a very liberal alcohol culture and a strong tradition of beer brewing, although binge drinking is frowned upon and the Danish police take driving under the influence very seriously. Inner-city areas such as Istedgade and Enghave Plads in Vesterbro, Sankt Hans Torv in Nørrebro and certain places in Frederiksberg are especially noted for their nightlife. Notable nightclubs include Bakken Kbh, ARCH (previously ZEN), Jolene, The Jane, Chateau Motel, KB3, At Dolores (previously Sunday Club), Rust, Vega Nightclub, Culture Box and Gefährlich, which also serves as a bar, café, restaurant, and art gallery. Copenhagen has several recurring community festivals, mainly in the summer. Copenhagen Carnival has taken place every year since 1982 during the Whitsun Holiday in Fælledparken and around the city, with the participation of 120 bands, 2,000 dancers and 100,000 spectators. Since 2010, the old B&W Shipyard at Refshaleøen in the harbour has been the location for Copenhell, a heavy metal music festival. Copenhagen Pride is a gay pride festival taking place every year in August. The Pride features a series of different activities all over Copenhagen, but it is at the City Hall Square that most of the celebration takes place. During the Pride the square is renamed Pride Square. Copenhagen Distortion has emerged as one of the biggest street festivals in Europe, with 100,000 people joining the parties at the beginning of June every year. Copenhagen has the two oldest amusement parks in the world.
Dyrehavsbakken, a fair-ground and pleasure-park established in 1583, is located in Klampenborg, just north of Copenhagen, in a forested area known as Dyrehaven. Created as an amusement park complete with rides, games and restaurants by Christian IV, it is the oldest surviving amusement park in the world. Pierrot, a nitwit dressed in white with a scarlet grin, wearing a boat-like hat while entertaining children, remains one of the park's key attractions. In Danish, Dyrehavsbakken is often abbreviated as "Bakken". There is no entrance fee, and Klampenborg Station, on the C-line, is situated nearby. The Tivoli Gardens is an amusement park and pleasure garden located in central Copenhagen between the City Hall Square and the Central Station. It opened in 1843, making it the second-oldest amusement park in the world. Among its rides are the oldest still-operating rollercoaster, "Rutschebanen", from 1915, and the oldest Ferris wheel still in use, opened in 1943. Tivoli Gardens also serves as a venue for various performing arts and as an active part of the cultural scene in Copenhagen. Copenhagen has over 94,000 students enrolled in its largest universities and institutions: University of Copenhagen (38,867 students), Copenhagen Business School (19,999 students), Metropolitan University College and University College Capital (10,000 students each), Technical University of Denmark (7,000 students), KEA (c. 4,500 students), IT University of Copenhagen (2,000 students) and Aalborg University – Copenhagen (2,300 students). The University of Copenhagen is Denmark's oldest university, founded in 1479. It attracts some 1,500 international and exchange students every year. The Academic Ranking of World Universities placed it 30th in the world in 2016. The Technical University of Denmark is located in Lyngby on the northern outskirts of Copenhagen. In 2013, it was ranked as one of the leading technical universities in Northern Europe.
The IT University is Denmark's youngest university, a mono-faculty institution focusing on technical, societal and business aspects of information technology. The Danish Academy of Fine Arts has provided education in the arts for more than 250 years. It includes the historic School of Visual Arts and has in recent years come to include a School of Architecture, a School of Design and a School of Conservation. Copenhagen Business School (CBS) is an EQUIS-accredited business school located in Frederiksberg. There are also branches of both University College Capital and Metropolitan University College inside and outside Copenhagen. The city has a variety of sporting teams. The major football teams are the historically successful FC København and Brøndby. FC København plays at Parken in Østerbro. Formed in 1992, it is a merger of two older Copenhagen clubs, B 1903 (from the inner suburb Gentofte) and KB (from Frederiksberg). Brøndby plays at Brøndby Stadion in the inner suburb of Brøndbyvester. BK Frem is based in the southern part of Copenhagen (Sydhavnen, Valby). Other teams are FC Nordsjælland (from suburban Farum), Fremad Amager, B93, AB, Lyngby and Hvidovre IF. Copenhagen has several teams in handball, a sport that is particularly popular in Denmark. Clubs playing in the highest leagues include Ajax, Ydun, and HIK (Hellerup). The København Håndbold women's club has recently been established. Copenhagen also has ice hockey teams, three of which play in the top league: Rødovre Mighty Bulls, Herlev Eagles and Hvidovre Ligahockey, all inner-suburban clubs. Copenhagen Ice Skating Club, founded in 1869, is the oldest ice hockey team in Denmark but is no longer in the top league. Rugby union is also played in the Danish capital, with teams such as CSR-Nanok, Copenhagen Business School Sport Rugby, Frederiksberg RK, Exiles RUFC and Rugbyklubben Speed. Rugby league is now played in Copenhagen, with the national team playing out of Gentofte Stadion.
The Danish Australian Football League, based in Copenhagen, is the largest Australian rules football competition outside the English-speaking world. Copenhagen Marathon, Copenhagen's annual marathon event, was established in 1980. The Round Christiansborg Open Water Swim Race is an open water swimming competition taking place each year in late August. This amateur event is combined with a Danish championship. In 2009 the event included a FINA World Cup competition in the morning. Copenhagen hosted the 2011 UCI Road World Championships in September 2011, taking advantage of its bicycle-friendly infrastructure. It was the first time that Denmark had hosted the event since 1956, when it was also held in Copenhagen. The greater Copenhagen area has a very well established transportation infrastructure, making it a hub in Northern Europe. Copenhagen Airport, opened in 1925, is Scandinavia's largest airport, located in Kastrup on the island of Amager. It is connected to the city centre by metro and main-line railway services. October 2013 was a record month with 2.2 million passengers, and November 2013 figures reveal that the number of passengers is increasing by some 3% annually, about 50% more than the European average. Copenhagen has an extensive road network, including motorways connecting the city to other parts of Denmark and to Sweden over the Øresund Bridge. The car is still the most popular form of transport within the city itself, representing two-thirds of all distances travelled. This can, however, lead to serious congestion in rush hour traffic. The Øresund train links Copenhagen with Malmö 24 hours a day, 7 days a week. Copenhagen is also served by a daily ferry connection to Oslo in Norway. In 2012, Copenhagen Harbour handled 372 cruise ships and 840,000 passengers. The Copenhagen S-Train, Copenhagen Metro and the regional train networks are used by about half of the city's passengers, the remainder using bus services.
Nørreport Station near the city centre serves passengers travelling by main-line rail, S-train, regional train, metro and bus. Some 750,000 passengers make use of public transport facilities every day. Copenhagen Central Station is the hub of the DSB railway network serving Denmark and international destinations. The Copenhagen Metro expanded radically with the opening of the City Circle Line (M3) on September 29, 2019.
https://en.wikipedia.org/wiki?curid=5166
Combinatorics Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics, from evolutionary biology to computer science, etc. To fully understand the scope of combinatorics requires a great deal of further amplification, the details of which are not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with the counting, existence, construction and optimization of specified finite structures. Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an "ad hoc" solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right.
One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. In the 6th century BCE, the ancient Indian physician Sushruta asserts in the Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. The Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) over a rather delicate enumerative problem, which was later shown to be related to the Schröder–Hipparchus numbers. In the "Ostomachion", Archimedes (3rd century BCE) considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra (c. 850) provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra (c. 1140) established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321. The arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
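The Sushruta count above is easy to verify: a non-empty selection from 6 tastes corresponds to a non-empty subset of a 6-element set, and summing the binomial coefficients C(6, k) for k = 1, …, 6 gives 2^6 − 1 = 63. A small illustrative check in Python:

```python
from math import comb

# Number of ways to choose k tastes out of 6, for k = 1..6.
tastes = 6
by_size = [comb(tastes, k) for k in range(1, tastes + 1)]
total = sum(by_size)

print(by_size)  # [6, 15, 20, 15, 6, 1]
print(total)    # 63, which equals 2**6 - 1
assert total == 2**tastes - 1
```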
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
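As a concrete instance of the Fibonacci example mentioned above: the number of ways to tile a 1 × n strip with squares and dominoes equals the Fibonacci number F(n+1). The sketch below is illustrative only (the function names are not standard); it compares brute-force enumeration against the familiar recurrence:

```python
def tilings(n):
    """Enumerate all tilings of a 1 x n strip by pieces of length 1 and 2."""
    if n == 0:
        return [[]]  # exactly one empty tiling
    result = []
    for piece in (1, 2):
        if piece <= n:
            for rest in tilings(n - piece):
                result.append([piece] + rest)
    return result

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The count of tilings of a strip of length n is F(n+1):
counts = [len(tilings(n)) for n in range(1, 8)]
print(counts)  # [1, 2, 3, 5, 8, 13, 21]
assert all(len(tilings(n)) == fib(n + 1) for n in range(1, 8))
```

This is typical of enumerative combinatorics: a brute-force count reveals a simple closed recurrence, which then replaces the enumeration entirely.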
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Graphs are fundamental objects in combinatorics. Considerations of graph theory range from enumeration (e.g., the number of graphs on "n" vertices with "k" edges) to the existence of structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph "G" and two numbers "x" and "y", does the Tutte polynomial "T""G"("x","y") have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems. Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as in Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system, and such systems play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics. Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory.
It should not be confused with discrete geometry (combinatorial geometry). Order theory is the study of partially ordered sets, both finite and infinite. Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras. Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics. Extremal combinatorics studies extremal questions on set systems. The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on "2n" vertices is a complete bipartite graph "Kn,n". Often it is too hard even to find the extremal answer "f"("n") exactly and one can only give an asymptotic estimate. Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle. In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find), simply by observing that the probability of randomly selecting an object with those properties is greater than 0. 
This approach (often referred to as "the" probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time. Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. However, with the growth of applications to analyze algorithms in computer science, as well as classical probability, additive number theory, and probabilistic number theory, the area recently grew to become an independent field of combinatorics. Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics is continuously expanding its scope, in both topics and techniques, and can be seen as the area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics. While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field. Geometric combinatorics is related to convex and discrete geometry, in particular polyhedral combinatorics. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. 
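To illustrate the kind of computation involved: in the Erdős–Rényi random graph G(n, p), linearity of expectation gives an average of C(n, 3)·p³ triangles, since each of the C(n, 3) vertex triples forms a triangle with probability p³. A hedged Monte Carlo sketch in Python (the helper names are illustrative, not a standard API):

```python
import random
from itertools import combinations
from math import comb

def random_graph(n, p, rng):
    """Sample an Erdős–Rényi graph G(n, p) as a set of edges (a, b) with a < b."""
    return {e for e in combinations(range(n), 2) if rng.random() < p}

def count_triangles(n, edges):
    """Count vertex triples whose three connecting edges are all present."""
    return sum(1 for a, b, c in combinations(range(n), 3)
               if {(a, b), (a, c), (b, c)} <= edges)

n, p, trials = 10, 0.5, 2000
rng = random.Random(0)  # fixed seed for reproducibility
avg = sum(count_triangles(n, random_graph(n, p, rng)) for _ in range(trials)) / trials

expected = comb(n, 3) * p**3  # linearity of expectation: C(10,3) * 0.5^3 = 15.0
print(avg, expected)  # the empirical average should be close to 15.0
```

The same first-moment reasoning, pushed further, underlies existence proofs in the probabilistic method: if the expected number of "bad" substructures is below 1, some object with none of them must exist.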
the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is an old-fashioned name for discrete geometry. Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology, which is an older name for algebraic topology. Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems. Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Gian-Carlo Rota used the name "continuous combinatorics" to describe geometric probability, since there are many analogies between "counting" and "measure". Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory. Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission.
It is now a large field of study, part of information theory. Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry. Combinatorial aspects of dynamical systems is another emerging field. Here dynamical systems can be defined on combinatorial objects. See for example graph dynamical system. There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.
https://en.wikipedia.org/wiki?curid=5170
Calculus Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while integral calculus concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering, and economics. In mathematics education, "calculus" denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word "calculus" (plural "calculi") is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine). Because such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it appeared in ancient Greece, then in China and the Middle East, and still later again in medieval Europe and in India. The ancient period introduced some of the ideas that led to integral calculus, but does not seem to have developed these ideas in a rigorous and systematic way. 
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (13th dynasty, c. 1820 BC); but the formulas are simple instructions, with no indication as to method, and some of them lack major components. From the age of Greek mathematics, Eudoxus (c. 408–355 BC) used the method of exhaustion, which foreshadows the concept of the limit, to calculate areas and volumes, while Archimedes (c. 287–212 BC) developed this idea further, inventing heuristics which resemble the methods of integral calculus. The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040), derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics thereby stated components of calculus. A complete theory encompassing these components is now well known in the Western world as the "Taylor series" or "infinite series approximations". However, they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today". In Europe, the foundational work was a treatise written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. 
The ideas were similar to Archimedes' in "The Method", but this treatise is believed to have been lost in the 13th century, and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first. The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving the second fundamental theorem of calculus around 1670. The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his "Principia Mathematica" (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. 
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics and Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, second and higher derivatives, and the notion of an approximating polynomial series. By Newton's time, the fundamental theorem of calculus was known. When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his "Method of Fluxions"), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions". 
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi. In calculus, "foundations" refers to the rigorous development of the subject from axioms and definitions. In early calculus the use of infinitesimal quantities was thought unrigorous, and was fiercely criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book "The Analyst" in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today. Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's "Cours d'Analyse", we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can actually validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to Euclidean space and the complex plane. 
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever. Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher power infinitesimals during derivations. While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Isaac Newton and Gottfried Wilhelm Leibniz built on the work of earlier mathematicians to introduce its basic principles. The development of calculus was built on earlier concepts of instantaneous motion and area underneath curves. Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series. Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. 
For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes. Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dx and dy were taken to be infinitesimal, and the derivative dy/dx was simply their ratio. The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. However, the concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals. In the late 19th century, infinitesimals were replaced within academia by the (ε, δ) approach to limits. Limits describe the value of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior in the context of the real number system. In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by very small numbers, and the infinitely small behavior of the function is found by taking the limiting behavior for smaller and smaller numbers. 
Limits were thought to provide a more rigorous foundation for calculus, and for this reason they became the standard approach during the twentieth century. Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called "differentiation". Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the "derivative function" or just the "derivative" of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by deriving the squaring function turns out to be the doubling function. In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x². The "derivative" now takes the function f(x), defined by the expression "x²", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as will turn out. The most common symbol for a derivative is an apostrophe-like mark called prime. 
Thus, the derivative of a function called f is denoted by f′, pronounced "f prime". For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative (the doubling function from above). This notation is known as Lagrange's notation. If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball. If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the "y"-intercept, and: m = (change in y) / (change in x) = Δy / Δx. This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is m = (f(a + h) − f(a)) / h. This expression is called a "difference quotient". A line through two points on a curve is called a "secant line", so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero: f′(a) = lim(h → 0) (f(a + h) − f(a)) / h. Geometrically, the derivative is the slope of the tangent line to the graph of f at a. 
The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f. Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function. Then f′(3) = lim(h → 0) ((3 + h)² − 9) / h = lim(h → 0) (6h + h²) / h = 6. The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the "derivative function" of the squaring function, or just the "derivative" of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function. A common notation, introduced by Leibniz, for the derivative in the example above is dy/dx = 2x. In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example: d/dx(x²) = 2x. In this usage, the dx in the denominator is read as "with respect to x". Another example of correct notation could be: g(t) = t² + 2t + 4, d/dt g(t) = 2t + 2. Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative. "Integral calculus" is the study of the definitions, properties, and applications of two related concepts, the "indefinite integral" and the "definite integral". 
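The limit of difference quotients described above can be illustrated numerically. The sketch below (function names, the point a = 3, and the step sizes are illustrative choices, not from the article) shrinks h and watches the secant slopes of the squaring function settle toward the tangent slope 6:

```python
# Secant slopes of f(x) = x^2 at a = 3 approach the derivative f'(3) = 6
# as the step h shrinks toward zero.

def difference_quotient(f, a, h):
    """Slope of the secant line between (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

def square(x):
    return x * x

def derivative(f, h=1e-6):
    """Derivative as an operator: input a function, output another function.
    Uses a central difference as a numerical stand-in for the limit."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x^2 at a = 3, the secant slope is exactly 6 + h,
# so shrinking h drives it toward 6.
for h in [1.0, 0.1, 0.001, 1e-6]:
    print(h, difference_quotient(square, 3.0, h))

doubling = derivative(square)   # numerically, the doubling function
print(doubling(3.0))
```

The `derivative` helper mirrors the article's "operator" view of differentiation: it consumes the squaring function and returns (an approximation of) the doubling function.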
The process of finding the value of an integral is called "integration". In technical language, integral calculus studies two related linear operators. The "indefinite integral", also known as the "antiderivative", is the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The "definite integral" inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum. A motivating example is the distances traveled in a given time. If the speed is constant, only multiplication is needed, but if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled. When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. 
This connection between the area under a curve and distance traveled can be extended to "any" irregularly shaped region exhibiting a fluctuating velocity over a given time period. If f(x) in the diagram on the right represents speed as it varies over time, the distance traveled (between the times represented by a and b) is the area of the shaded region s. To approximate that area, an intuitive method would be to divide up the distance between a and b into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer we need to take a limit as Δx approaches zero. The symbol of integration is ∫, an elongated "S" (the "S" stands for "sum"). The definite integral is written as: ∫_a^b f(x) dx and is read "the integral from "a" to "b" of "f"-of-"x" with respect to "x"." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx. In a formulation of the calculus based on limits, the notation ∫_a^b … dx is to be understood as an operator that takes a function as an input and gives a number, the area, as an output. The terminating differential, dx, is not a number, and is not being multiplied by f(x), although, serving as a reminder of the limit definition, it can be treated as such in symbolic manipulations of the integral. 
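The Riemann-sum procedure just described (equal segments, one speed value per segment, sum of rectangle areas) can be sketched in a few lines. The speed function and the segment counts below are hypothetical choices for illustration:

```python
# Left Riemann sum: approximate distance traveled as the sum of
# (speed in segment) x (segment width dx) over equal time segments.

def riemann_sum(f, a, b, n):
    """Approximate the area under f over [a, b] with n rectangles of width dx,
    sampling f at the left endpoint of each segment."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

def speed(t):
    return t * t  # a hypothetical, non-constant speed curve

# More (narrower) rectangles give a better approximation of the exact
# area under t^2 from 0 to 3, which is 9.
for n in [10, 100, 10000]:
    print(n, riemann_sum(speed, 0.0, 3.0, n))
```

As the article says, the approximation improves as the segment width shrinks; the exact distance is the limit of these sums as Δx approaches zero.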
Formally, the differential indicates the variable over which the function is integrated and serves as a closing bracket for the integration operator. The indefinite integral, or antiderivative, is written: ∫ f(x) dx. Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is actually a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by: ∫ 2x dx = x² + C. The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration. The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration. The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then ∫_a^b f(x) dx = F(b) − F(a). Furthermore, for every x in the interval (a, b), d/dx ∫_a^x f(t) dt = f(x). This realization, made by both Newton and Leibniz, who based their results on earlier work by Isaac Barrow, was key to the proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences. 
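As a rough numerical check of the fundamental theorem, the sketch below (the integrand, interval, and helper names are illustrative assumptions) compares a Riemann-style approximation of the definite integral of f(x) = 2x over [1, 4] against F(b) − F(a) for the antiderivative F(x) = x² + C, and shows that the constant of integration cancels:

```python
# Fundamental theorem sketch: the definite integral of f(x) = 2x over [1, 4]
# should equal F(4) - F(1) for any antiderivative F(x) = x^2 + C.

def definite_integral(f, a, b, n=100000):
    """Midpoint-rule Riemann sum: sample f at the midpoint of each segment."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda x: 2 * x
F = lambda x, C=0.0: x * x + C   # family of antiderivatives, one per C

approx = definite_integral(f, 1.0, 4.0)
exact = F(4.0) - F(1.0)          # = 15; the constant C cancels in F(b) - F(a)
print(approx, exact)
```

The point of the comparison is the one the article makes: evaluating an antiderivative at the endpoints replaces the limit process entirely.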
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, as well as the total energy of an object within a conservative field can be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion: as historically stated, it expressly uses the term "change of motion", which implies the derivative, saying "The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction." Commonly expressed today as Force = Mass × acceleration, it implies differential calculus because acceleration is the time derivative of velocity or second time derivative of trajectory or spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. 
Or it can be used in probability theory to determine the probability of a continuous random variable from an assumed density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Green's Theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. Discrete Green's Theorem, which gives the relationship between a double integral of a function around a simple closed rectangular curve "C" and a linear combination of the antiderivative's values at corner points along the edge of the curve, allows fast calculation of sums of values in rectangular domains. For example, it can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; another algorithm that could be used is the summed area table. In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies. In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue. Calculus is also used to find approximate solutions to equations; in practice it is the standard way to solve differential equations and do root finding in most applications. 
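Returning to the summed-area-table idea mentioned above: precomputing cumulative sums lets any rectangular sum in an image be read off with four lookups, which is the discrete analogue of evaluating an antiderivative at corner points. A minimal sketch (the toy 3×3 "image" and helper names are illustrative assumptions):

```python
# Summed-area table: sat[r][c] holds the sum of all grid entries above and
# to the left of (r, c); any rectangular sum then needs only four lookups.

def summed_area_table(grid):
    rows, cols = len(grid), len(grid[0])
    sat = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            sat[r + 1][c + 1] = (grid[r][c] + sat[r][c + 1]
                                 + sat[r + 1][c] - sat[r][c])
    return sat

def rect_sum(sat, top, left, bottom, right):
    """Sum of grid[top..bottom][left..right] (inclusive) in O(1):
    a linear combination of the table's values at the four corners."""
    return (sat[bottom + 1][right + 1] - sat[top][right + 1]
            - sat[bottom + 1][left] + sat[top][left])

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
sat = summed_area_table(image)
print(rect_sum(sat, 0, 0, 1, 1))  # 1 + 2 + 4 + 5 = 12
```

Each query costs the same regardless of rectangle size, which is why the technique is used for fast feature extraction in images.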
Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero gravity environments. Over the years, many reformulations of calculus have been investigated for different purposes. Imprecise calculations with infinitesimals were widely replaced with the rigorous (ε, δ)-definition of limit starting in the 1870s. Meanwhile, calculations with infinitesimals persisted and often led to correct results. This led Abraham Robinson to investigate whether it was possible to develop a number system with infinitesimal quantities over which the theorems of calculus were still valid. In 1960, building upon the work of Edwin Hewitt and Jerzy Łoś, he succeeded in developing non-standard analysis. The theory of non-standard analysis is rich enough to be applied in many branches of mathematics. As such, books and articles dedicated solely to the traditional theorems of calculus often go by the title non-standard calculus. Smooth infinitesimal analysis is another reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. Constructive mathematics is a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. As such constructive mathematics also rejects the law of excluded middle. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
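Newton's method, named above as a standard way to find approximate solutions to equations, repeatedly follows the tangent line of f down to the axis: x_next = x − f(x)/f′(x). A brief sketch (the target equation x² − 2 = 0 and the tolerances are hypothetical choices):

```python
# Newton's method: improve a root guess x for f(x) = 0 by replacing x with
# the point where the tangent line at (x, f(x)) crosses the axis.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # tangent-line correction
        x -= step
        if abs(step) < tol:       # stop once updates are negligible
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```

Because each iteration roughly doubles the number of correct digits near a simple root, a handful of steps already matches floating-point precision here.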
https://en.wikipedia.org/wiki?curid=5176
Communication Communication (from Latin "communicare", meaning "to share") is the act of conveying meanings from one entity or group to another through the use of mutually understood signs, symbols, and semiotic rules. The main steps inherent to all communication are: The scientific study of communication can be divided into: The channel of communication can be visual, auditory, tactile/haptic (e.g. Braille or other physical means), olfactory, electromagnetic, or biochemical. Human communication is unique for its extensive use of abstract language. Development of civilization has been closely linked with progress in telecommunication. Nonverbal communication describes the processes of conveying information in the form of non-linguistic representations. Examples of nonverbal communication include haptic communication, chronemic communication, gestures, body language, facial expressions, eye contact, etc. Nonverbal communication also relates to the intent of a message. Examples include voluntary, intentional movements like shaking a hand or winking, as well as involuntary signals such as sweating. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, tempo, and stress. It affects communication most at the subconscious level and establishes trust. Likewise, written texts include nonverbal elements such as handwriting style, the spatial arrangement of words and the use of emoticons to convey emotion. Nonverbal communication demonstrates one of Paul Watzlawick's laws: you cannot not communicate. Once proximity has formed awareness, living creatures begin interpreting any signals received. Some of the functions of nonverbal communication in humans are to complement and illustrate, to reinforce and emphasize, to replace and substitute, to control and regulate, and to contradict the denotative message.
Nonverbal cues are heavily relied on to express communication and to interpret others' communication and can replace or substitute verbal messages. However, non-verbal communication is ambiguous. When verbal messages contradict non-verbal messages, observation of non-verbal behaviour is relied on to judge another's attitudes and feelings, rather than assuming the truth of the verbal message alone. There are several reasons as to why non-verbal communication plays a vital role in communication: "Non-verbal communication is omnipresent." Non-verbal cues are included in every single communication act. To have total communication, all non-verbal channels such as the body, face, voice, appearance, touch, distance, timing, and other environmental forces must be engaged during face-to-face interaction. Written communication can also have non-verbal attributes. E-mails, web chats, and social media offer options to change text font colours and stationery, and to add emoticons, capitalization, and pictures in order to capture non-verbal cues in a verbal medium. "Non-verbal behaviours are multifunctional." Many different non-verbal channels are engaged at the same time in communication acts and allow the chance for simultaneous messages to be sent and received. "Non-verbal behaviours may form a universal language system." Smiling, crying, pointing, caressing, and glaring are non-verbal behaviours that are used and understood by people regardless of nationality. Such non-verbal signals allow the most basic form of communication when verbal communication is not effective due to language barriers. Verbal communication is the spoken or written conveyance of a message. Human language can be defined as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also refers to common properties of languages. Language learning normally occurs most intensively during human childhood.
Most of the large number of human languages use patterns of sound or gesture for symbols which enable communication with others around them. Languages tend to share certain properties, although there are exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. As previously mentioned, language can be characterized as symbolic. Charles Ogden and I. A. Richards developed the Triangle of Meaning model to explain the relationship between the symbol (the word), the referent (the thing it describes), and the meaning (the thought associated with the word and the thing). The properties of language are governed by rules. Language follows phonological rules (sounds that appear in a language), syntactic rules (arrangement of words and punctuation in a sentence), semantic rules (the agreed upon meaning of words), and pragmatic rules (meaning derived from context). The meanings that are attached to words can be literal, otherwise known as denotative, relating to the topic being discussed; or the meanings can take context and relationships into account, otherwise known as connotative, relating to the feelings, history, and power dynamics of the communicators. Contrary to popular belief, signed languages of the world (e.g., American Sign Language) are considered to be verbal communication because their sign vocabulary, grammar, and other linguistic structures meet the same criteria as those of spoken languages. There are, however, nonverbal elements to signed languages, such as the speed, intensity, and size of signs that are made. A signer might sign "yes" in response to a question, or they might sign a sarcastically large, slow "yes" to convey a different nonverbal meaning. The sign "yes" is the verbal message while the other movements add nonverbal meaning to the message.
Over time the forms of and ideas about communication have evolved through the continuing progression of technology. Advances include communications psychology and media psychology, an emerging field of study. The progression of written communication can be divided into three "information communication revolutions": Communication is thus a process by which meaning is assigned and conveyed in an attempt to create shared understanding. Gregory Bateson called it "the replication of tautologies in the universe." This process, which requires a vast repertoire of skills in interpersonal processing, listening, observing, speaking, questioning, analyzing, gesturing, and evaluating, enables collaboration and cooperation. Business communication is used for a wide variety of activities including, but not limited to: strategic communications planning, media relations, internal communications, public relations (which can include social media, broadcast and written communications, and more), brand management, reputation management, speech-writing, customer-client relations, and internal/employee communications. Companies with limited resources may choose to engage in only a few of these activities, while larger organizations may employ a full spectrum of communications. Since it is relatively difficult to develop such a broad range of skills, communications professionals often specialize in one or two of these areas but usually have at least a working knowledge of most of them. By far, the most important qualifications communications professionals must possess are excellent writing ability, good 'people' skills, and the capacity to think critically and strategically. Business communication could also refer to the style of communication within a given corporate entity, i.e. email conversation styles, or internal communication styles. Communication is one of the most relevant tools in political strategies, including persuasion and propaganda.
In mass media research and online media research, the effort of the strategist is that of getting a precise decoding, avoiding "message reactance", that is, message refusal. The reaction to a message is also described in terms of the approach to a message, as follows: Holistic approaches are used by communication campaign leaders and communication strategists in order to examine all the options, "actors" and channels that can generate change in the semiotic landscape, that is, change in perceptions, change in credibility, change in the "memetic background", change in the image of movements, of candidates, players and managers as perceived by key influencers that can have a role in generating the desired "end-state". The modern political communication field is highly influenced by the framework and practices of "information operations" doctrines that derive their nature from strategic and military studies. According to this view, what is really relevant is the concept of acting on the Information Environment. The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive. Family communication is the study of the communication perspective in a broadly defined family, with intimacy and trusting relationship. The main goal of family communication is to understand the interactions of family and the pattern of behaviors of family members in different circumstances. Open and honest communication creates an atmosphere that allows family members to express their differences as well as love and admiration for one another. It also helps to understand the feelings of one another.
Family communication study looks at topics such as family rules, family roles or family dialectics and how those factors could affect the communication between family members. Researchers develop theories to understand communication behaviors. Family communication study also digs deep into certain time periods of family life such as marriage, parenthood or divorce and how communication stands in those situations. It is important for family members to understand communication as a trusted means of building a well-functioning family. In simple terms, interpersonal communication is the communication between one person and another (or others). It is often referred to as face-to-face communication between two (or more) people. Both verbal and nonverbal communication, or body language, play a part in how one person understands another. In verbal interpersonal communication there are two types of messages being sent: a content message and a relational message. Content messages are messages about the topic at hand and relational messages are messages about the relationship itself. This means that relational messages come across in "how" one says something, demonstrating a person's feelings, whether positive or negative, towards the individual they are talking to and indicating not only how they feel about the topic at hand, but also how they feel about their relationship with the other individual. There are many different aspects of interpersonal communication including: Barriers to effective communication can retard or distort the message or intention of the message being conveyed. This may result in failure of the communication process or cause an effect that is undesirable. These include filtering, selective perception, information overload, emotions, language, silence, communication apprehension, gender differences and political correctness.
This also includes a lack of expressing "knowledge-appropriate" communication, which occurs when a person uses ambiguous or complex legal words, medical jargon, or descriptions of a situation or environment that is not understood by the recipient. Cultural differences exist within countries (tribal/regional differences, dialects etc.), between religious groups and in organisations or at an organisational level – where companies, teams and units may have different expectations, norms and idiolects. Families and family groups may also experience the effect of cultural barriers to communication within and between different family members or groups. For example: words, colours and symbols have different meanings in different cultures. In most parts of the world, nodding the head means agreement and shaking the head means no, but this is not universal. Communication to a great extent is influenced by culture and cultural variables. Understanding "cultural aspects of communication" refers to having knowledge of different cultures in order to communicate effectively with people from other cultures. Cultural aspects of communication are of great relevance in today's world which is now a global village, thanks to globalisation. Cultural aspects of communication are the cultural differences which influence communication across borders. The impact of cultural differences on communication components is explained below: So, in order to communicate effectively across the world, it is desirable to have knowledge of the cultural variables affecting communication. According to Michael Walsh and Ghil'ad Zuckermann, Western conversational interaction is typically "dyadic", between two particular people, where eye contact is important and the speaker controls the interaction; and "contained" in a relatively short, defined time frame.
However, traditional Aboriginal conversational interaction is "communal", broadcast to many people, eye contact is not important, the listener controls the interaction; and "continuous", spread over a longer, indefinite time frame. Every information exchange between living organisms, i.e. transmission of signals involving a living sender and receiver, can be considered a form of communication; even primitive creatures such as corals are capable of communicating. Nonhuman communication also includes cell signaling, cellular communication, and chemical transmissions between primitive organisms like bacteria and within the plant and fungal kingdoms. The broad field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication, called "zoosemiotics" (distinguishable from anthroposemiotics, the study of human communication) has played an important part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century so far, a great share of prior understanding related to diverse fields such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized. Communication is observed within the plant organism, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots communicate with rhizosphere bacteria, fungi, and insects within the soil. Recent research has shown that most of the microorganism-plant communication processes are neuron-like.
Plants also communicate via volatiles when exposed to herbivory attack behavior, thus warning neighboring plants. In parallel, they produce other volatiles to attract parasites which attack these herbivores. Fungi communicate to coordinate and organize their growth and development such as the formation of mycelia and fruiting bodies. Fungi communicate with their own and related species as well as with non-fungal organisms in a great variety of symbiotic interactions, especially with bacteria, unicellular eukaryotes, plants and insects through biochemicals of biotic origin. The biochemicals trigger the fungal organism to react in a specific manner, while if the same chemical molecules are not part of biotic messages, they do not trigger the fungal organism to react. This implies that fungal organisms can differentiate between molecules taking part in biotic messages and similar molecules being irrelevant in the situation. So far five different primary signalling molecules are known to coordinate different behavioral patterns such as filamentation, mating, growth, and pathogenicity. Behavioral coordination and production of signaling substances is achieved through interpretation processes that enable the organism to distinguish between self and non-self, a biotic indicator, a biotic message from similar, related, or non-related species, and even filter out "noise", i.e. similar molecules without biotic content. Communication is not a tool used only by humans, plants and animals, but it is also used by microorganisms like bacteria. The process is called quorum sensing. Through quorum sensing, bacteria can sense the density of cells, and regulate gene expression accordingly. This can be seen in both gram-positive and gram-negative bacteria. This was first observed by Fuqua "et al." in marine microorganisms like "V. harveyi" and "V. fischeri".
The first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949. The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear the other person. Shannon and Weaver also recognized that often there is static that interferes with one listening to a telephone conversation, which they deemed noise. In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (as spoken language) from an emitter/sender/encoder to a destination/receiver/decoder. This common conception of communication simply views communication as a means of sending and receiving information. The strengths of this model are simplicity, generality, and quantifiability. Claude Shannon and Warren Weaver structured this model based on the following elements: Shannon and Weaver argued that there were three levels of problems for communication within this theory. Daniel Chandler has critiqued the transmission model. In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication. The Sender-Message-Channel-Receiver Model of communication separated the model into clear parts and has been expanded upon by other scholars. Communication is usually described along a few major dimensions: message (what type of things are communicated), source/sender/encoder (from whom), form (in which form), channel (through which medium), destination/receiver/target/decoder (to whom).
Wilbur Schramm (1954) added that we should also examine the impact that a message has (both desired and undesired) on the target of the message. Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings). Communication can be seen as processes of information transmission with three levels of semiotic rules: Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk, both secondary phenomena that followed the primary acquisition of communicative competences within social interactions. In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication. The basic premise of the transactional model of communication is that individuals are simultaneously engaging in the sending and receiving of messages. In a slightly more complex form a sender and a receiver are linked reciprocally. This second view of communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed as a conduit: a passage in which information travels from one individual to another, and this information becomes separate from the communication itself. A particular instance of communication is called a speech act.
The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender, which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two code books are, at the very least, similar if not identical. Although something like code books is implied by the model, they are nowhere represented in the model, which creates many conceptual difficulties. Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. Canadian media scholar Harold Innis theorized that people use different types of media to communicate, and that the medium a society chooses offers different possibilities for the shape and durability of that society. His famous example is ancient Egypt, which built itself out of two media with very different properties: stone and papyrus. Papyrus is what he called 'Space Binding': it made possible the transmission of written orders across space and empires, enabling the waging of distant military campaigns and colonial administration. Stone, by contrast, is 'Time Binding': through the construction of temples and pyramids, rulers could sustain their authority from generation to generation, and through this medium they could change and shape communication in their society. In any communication model, noise is interference with the decoding of messages sent over the channel by an encoder. There are many examples of noise. To cope with communication noise, redundancy and acknowledgement must often be used.
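The role of redundancy in overcoming channel noise can be illustrated with a toy repetition code: each bit of the message is sent several times over a simulated noisy channel and decoded by majority vote. This is an illustrative sketch under our own assumptions (function names and parameters are hypothetical), not a model taken from the text.

```python
import random

def transmit(bits, flip_prob, rng):
    # A noisy channel: each bit is flipped independently with probability flip_prob.
    return [b ^ (rng.random() < flip_prob) for b in bits]

def send_with_redundancy(bits, n_copies, flip_prob, rng):
    # Repetition code: send each bit n_copies times, decode by majority vote.
    received = transmit([b for b in bits for _ in range(n_copies)], flip_prob, rng)
    decoded = []
    for i in range(0, len(received), n_copies):
        votes = received[i:i + n_copies]
        decoded.append(1 if sum(votes) > n_copies // 2 else 0)
    return decoded

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]
print(send_with_redundancy(message, n_copies=5, flip_prob=0.1, rng=rng))
```

With a 10% per-bit flip probability, a single transmission of each bit is wrong 10% of the time, but the 5-copy majority vote is wrong only when three or more copies flip, which is far less likely: redundancy trades bandwidth for reliability.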
Acknowledgements are messages from the addressee informing the originator that his/her communication has been received and is understood. Message repetition and feedback about message received are necessary in the presence of noise to reduce the probability of misunderstanding. Disambiguation is the attempt to reduce noise and wrong interpretations when the semantic value or meaning of a sign is subject to noise, or has multiple meanings, which makes sense-making difficult. Disambiguation attempts to decrease the likelihood of misunderstanding. This is also a fundamental skill in communication processes used by counselors, psychotherapists, and interpreters, and in coaching sessions based on colloquium. In information technology, the disambiguation process and the automatic disambiguation of meanings of words and sentences has also been an interest and concern since the earliest days of computer treatment of language. The academic discipline that deals with processes of human communication is communication studies. The discipline encompasses a range of topics, from face-to-face conversation to mass media outlets such as television broadcasting. Communication studies also examines how messages are interpreted through the political, cultural, economic, semiotic, hermeneutic, and social dimensions of their contexts. Statistics, as a quantitative approach to communication science, has also been incorporated into research on communication science in order to help substantiate claims.
https://en.wikipedia.org/wiki?curid=5177
Classics Classics or classical studies is the study of classical antiquity, and in the Western world traditionally refers to the study of Classical Greek and Roman literature in their original languages of Ancient Greek and Latin, respectively. It may also include Greco-Roman philosophy, history, and archaeology as secondary subjects. In Western civilization, the study of the Greek and Roman classics was traditionally considered to be the foundation of the humanities, and study of classics has therefore traditionally been the cornerstone of a typical elite European education. The word "classics" is derived from the Latin adjective "classicus", meaning "belonging to the highest class of citizens". The word was originally used to describe the members of the highest class in ancient Rome. By the 2nd century AD the word was used in literary criticism to describe writers of the highest quality. For example, Aulus Gellius, in his "Attic Nights", contrasts "classicus" and "proletarius" writers. By the 6th century AD, the word had acquired a second meaning, referring to pupils at a school. Thus the two modern meanings of the word, referring both to literature considered to be of the highest quality and to the standard texts used as part of a curriculum, derive from Roman use. In the Middle Ages, classics and education were tightly intertwined; according to Jan Ziolkowski, there is no era in history in which the link was tighter. Medieval education taught students to imitate earlier classical models, and Latin continued to be the language of scholarship and culture, despite the increasing difference between literary Latin and the vernacular languages of Europe during the period. While Latin was hugely influential, however, Greek was barely studied, and Greek literature survived almost solely in Latin translation. The works of even major Greek authors such as Hesiod, whose names continued to be known by educated Europeans, were unavailable in the Middle Ages.
In the thirteenth century, the English philosopher Roger Bacon wrote that "there are not four men in Latin Christendom who are acquainted with the Greek, Hebrew, and Arabic grammars." Along with the unavailability of Greek authors, there were other differences between the classical canon known today and the works valued in the Middle Ages. Catullus, for instance, was almost entirely unknown in the medieval period. The popularity of different authors also waxed and waned throughout the period: Lucretius, popular during the Carolingian period, was barely read in the twelfth century, while for Quintilian the reverse is true. The Renaissance led to the increasing study of both ancient literature and ancient history, as well as a revival of classical styles of Latin. From the 14th century, first in Italy and then increasingly across Europe, Renaissance Humanism, an intellectual movement that "advocated the study and imitation of classical antiquity", developed. Humanism saw a reform in education in Europe, introducing a wider range of Latin authors as well as bringing back the study of Greek language and literature to Western Europe. This reintroduction was initiated by Petrarch (1304–1374) and Boccaccio (1313–1375), who commissioned a Calabrian scholar to translate the Homeric poems. This humanist educational reform spread from Italy: in Catholic countries it was adopted by the Jesuits, and in countries that became Protestant, such as England, Germany, and the Low Countries, it was adopted in order to ensure that future clerics were able to study the New Testament in the original language. The late 17th and 18th centuries are the period in Western European literary history which is most associated with the classical tradition, as writers consciously adapted classical models. Classical models were so highly prized that the plays of William Shakespeare were rewritten along neoclassical lines, and these "improved" versions were performed throughout the 18th century.
From the beginning of the 18th century, the study of Greek became increasingly important relative to that of Latin. In this period Johann Winckelmann's claims for the superiority of the Greek visual arts influenced a shift in aesthetic judgements, while in the literary sphere, G.E. Lessing "returned Homer to the centre of artistic achievement". In the United Kingdom, the study of Greek in schools began in the late 18th century. The poet Walter Savage Landor claimed to have been one of the first English schoolboys to write in Greek during his time at Rugby School. The 19th century saw the influence of the classical world, and the value of a classical education, decline, especially in the US, where the subject was often criticised for its elitism. By the 19th century, little new literature was still being written in Latin – a practice which had continued as late as the 18th century – and a command of Latin declined in importance. Correspondingly, classical education from the 19th century onwards began to increasingly de-emphasise the importance of the ability to write and speak Latin. In the United Kingdom this process took longer than elsewhere. Composition continued to be the dominant classical skill in England until the 1870s, when new areas within the discipline began to increase in popularity. In the same decade came the first challenges to the requirement of Greek at the universities of Oxford and Cambridge, though it would not be finally abolished for another 50 years. Though the influence of classics as the dominant mode of education in Europe and North America was in decline in the 19th century, the discipline was rapidly evolving in the same period. Classical scholarship was becoming more systematic and scientific, especially with the "new philology" created at the end of the 18th and beginning of the 19th century. 
Its scope was also broadening: it was during the 19th century that ancient history and classical archaeology began to be seen as part of classics rather than separate disciplines. During the 20th century, the study of classics became less common. In England, for instance, Oxford and Cambridge universities stopped requiring students to have qualifications in Greek in 1920, and in Latin at the end of the 1950s. When the National Curriculum was introduced in England, Wales, and Northern Ireland in 1988, it did not mention the classics. By 2003, only about 10% of state schools in Britain offered any classical subjects to their students at all. In 2016, AQA, the largest exam board for A-Levels and GCSEs in England, Wales and Northern Ireland, announced that it would be scrapping A-Level subjects in Classical Civilization, Archaeology, and Art History. This left just one out of five exam boards in England which still offered Classical Civilization as a subject. The decision was immediately denounced by archaeologists and historians, with Natalie Haynes of the "Guardian" stating that the loss of the A-Level would deprive state school students (93% of all students) of the opportunity to study classics, making it once again the exclusive purview of wealthy private-school students. However, the study of classics has not declined as fast elsewhere in Europe. In 2009, a review of "Meeting the Challenge", a collection of conference papers about the teaching of Latin in Europe, noted that though there is opposition to the teaching of Latin in Italy, it is nonetheless still compulsory in most secondary schools. The same can be said of France and Greece. Indeed, Ancient Greek is one of the compulsory subjects in Greek secondary education, whereas in France, Latin is one of the optional subjects that can be chosen in a majority of middle schools and high schools. Ancient Greek is also still being taught, but not as much as Latin.
One of the most notable characteristics of the modern study of classics is the diversity of the field. Although traditionally focused on ancient Greece and Rome, the study now encompasses the entire ancient Mediterranean world, extending to North Africa as well as parts of the Middle East. Philology is the study of language preserved in written sources; classical philology is thus concerned with understanding any texts from the classical period written in the classical languages of Latin and Greek. The roots of classical philology lie in the Renaissance, as humanist intellectuals attempted to return to the Latin of the classical period, especially of Cicero, and as scholars attempted to produce more accurate editions of ancient texts. Some of the principles of philology still used today were developed during this period; for instance, the observation that if a manuscript could be shown to be a copy of an earlier extant manuscript, then it provided no further evidence of the original text, was made as early as 1489 by Angelo Poliziano. Other philological tools took longer to be developed: the first statement, for instance, of the principle that a more difficult reading should be preferred over a simpler one was made in 1697 by Jean Le Clerc. The modern discipline of classical philology began in Germany at the turn of the nineteenth century. It was during this period that scientific principles of philology began to be put together into a coherent whole, in order to provide a set of rules by which scholars could determine which manuscripts were most accurate. This "new philology", as it was known, centred around the construction of a genealogy of manuscripts, with which a hypothetical common ancestor, closer to the original text than any existing manuscript, could be reconstructed. Classical archaeology is the oldest branch of archaeology, with its roots going back to J.J. Winckelmann's work on Herculaneum in the 1760s.
It was not until the last decades of the 19th century, however, that classical archaeology became part of the tradition of Western classical scholarship. It was included as part of Cambridge University's Classical Tripos for the first time after the reforms of the 1880s, though it did not become part of Oxford's Greats until much later. The second half of the 19th century saw Schliemann's excavations of Troy and Mycenae; the first excavations at Olympia and Delos; and Arthur Evans' work in Crete, particularly on Knossos. This period also saw the foundation of important archaeological associations (e.g. the Archaeological Institute of America in 1879), including many foreign archaeological institutes in Athens and Rome (the American School of Classical Studies at Athens in 1881, British School at Athens in 1886, American Academy in Rome in 1895, and British School at Rome in 1900). More recently, classical archaeology has taken little part in the theoretical changes in the rest of the discipline, largely ignoring the popularity of "New Archaeology", which emphasised the development of general laws derived from studying material culture, in the 1960s. New Archaeology is still criticised by traditionally minded scholars of classical archaeology despite a wide acceptance of its basic techniques. Some art historians focus their study on the development of art in the classical world. Indeed, the art and architecture of ancient Greece and Rome are very well regarded and remain at the heart of much Western art today. For example, Ancient Greek architecture gave us the Classical Orders: Doric, Ionic, and Corinthian. The Parthenon is still the architectural symbol of the classical world. Greek sculpture is well known, and the names of several Ancient Greek artists survive: Phidias, for example.
With philology, archaeology, and art history, scholars seek understanding of the history and culture of a civilisation, through critical study of the extant literary and physical artefacts, in order to compose and establish a continuous historical narrative of the Ancient World and its peoples. The task is difficult due to a dearth of physical evidence: for example, Sparta was a leading Greek city-state, yet little evidence of it survives to study, and what is available comes from Athens, Sparta's principal rival; likewise, the Roman Empire destroyed most evidence (cultural artefacts) of earlier, conquered civilizations, such as that of the Etruscans. The English word "philosophy" comes from the Greek word φιλοσοφία, meaning "love of wisdom", probably coined by Pythagoras. Along with the word itself, the discipline of philosophy as we know it today has its roots in ancient Greek thought, and according to Martin West "philosophy as we understand it is a Greek creation". Ancient philosophy was traditionally divided into three branches: logic, physics, and ethics. However, not all of the works of ancient philosophers fit neatly into one of these three branches. For instance, Aristotle's "Rhetoric" and "Poetics" have been traditionally classified in the West as "ethics", but in the Arabic world were grouped with logic; in reality, they do not fit neatly into either category. From the last decade of the eighteenth century, scholars of ancient philosophy began to study the discipline historically. Previously, works on ancient philosophy had been unconcerned with chronological sequence and with reconstructing the reasoning of ancient thinkers; with what Wolfgang-Rainer Mann calls "New Philosophy", this changed. A relatively recent new discipline within the classics is "reception studies", which developed in the 1960s at the University of Konstanz. Reception studies is concerned with how students of classical texts have understood and interpreted them.
As such, reception studies is interested in a two-way interaction between reader and text, taking place within a historical context. Though the idea of an "aesthetics of reception" was first put forward by Hans Robert Jauss in 1967, the principles of reception theory go back much earlier than this. As early as 1920, T. S. Eliot wrote that "the past [is] altered by the present as much as the present is directed by the past"; Charles Martindale describes this as a "cardinal principle" for many versions of modern reception theory. Ancient Greece was the civilization belonging to the period of Greek history lasting from the Archaic period, beginning in the eighth century BC, to the Roman conquest of Greece after the Battle of Corinth in 146 BC. The Classical period, during the fifth and fourth centuries BC, has traditionally been considered the height of Greek civilisation. The Classical period of Greek history is generally considered to have begun with the first and second Persian invasions of Greece at the start of the Greco-Persian wars, and to have ended with the death of Alexander the Great. Classical Greek culture had a powerful influence on the Roman Empire, which carried a version of it to many parts of the Mediterranean region and Europe; thus Classical Greece is generally considered to be the seminal culture which provided the foundation of Western civilization. Ancient Greek is the historical stage in the development of the Greek language spanning the Archaic (c. 8th to 6th centuries BC), Classical (c. 5th to 4th centuries BC), and Hellenistic (c. 3rd century BC to 6th century AD) periods of ancient Greece and the ancient world. It is predated in the 2nd millennium BC by Mycenaean Greek. Its Hellenistic phase is known as Koine ("common") or Biblical Greek, and its late period mutates imperceptibly into Medieval Greek. Koine is regarded as a separate historical stage of its own, although in its earlier form it closely resembles Classical Greek. 
Prior to the Koine period, Greek of the classical and earlier periods included several regional dialects. Ancient Greek was the language of Homer and of classical Athenian historians, playwrights, and philosophers. It has contributed many words to the vocabulary of English and many other European languages, and has been a standard subject of study in Western educational institutions since the Renaissance. Latinized forms of Ancient Greek roots are used in many of the scientific names of species and in other scientific terminology. The earliest surviving works of Greek literature are epic poetry. Homer's "Iliad" and "Odyssey" are the earliest to survive to us today, probably composed in the eighth century BC. These early epics were oral compositions, created without the use of writing. Around the same time that the Homeric epics were composed, the Greek alphabet was introduced; the earliest surviving inscriptions date from around 750 BC. European drama was invented in ancient Greece. Traditionally this was attributed to Thespis, around the middle of the sixth century BC, though the earliest surviving work of Greek drama is Aeschylus' tragedy "The Persians", which dates to 472 BC. Early Greek tragedy was performed by a chorus and two actors, but by the end of Aeschylus' life, a third actor had been introduced, either by him or by Sophocles. The last surviving Greek tragedies are the "Bacchae" of Euripides and Sophocles' "Oedipus at Colonus", both from the end of the fifth century BC. Surviving Greek comedy begins later than tragedy; the earliest surviving work, Aristophanes' "Acharnians", comes from 425 BC. However, comedy dates back as early as 486 BC, when the Dionysia added a competition for comedy to the much earlier competition for tragedy. The comedy of the fifth century is known as Old Comedy, and it comes down to us solely in the eleven surviving plays of Aristophanes, along with a few fragments.
Sixty years after the end of Aristophanes' career, the next author of comedies to have any substantial body of work survive is Menander, whose style is known as New Comedy. Two historians flourished during Greece's classical age: Herodotus and Thucydides. Herodotus is commonly called the father of history, and his "History" contains the first truly literary use of prose in Western literature. Of the two, Thucydides was the more careful historian. His critical use of sources, inclusion of documents, and laborious research made his "History of the Peloponnesian War" a significant influence on later generations of historians. The greatest achievement of the 4th century was in philosophy. There were many Greek philosophers, but three names tower above the rest: Socrates, Plato, and Aristotle. These have had a profound influence on Western society. Greek mythology is the body of myths and legends belonging to the ancient Greeks concerning their gods and heroes, the nature of the world, and the origins and significance of their own cult and ritual practices. They were a part of religion in ancient Greece. Modern scholars refer to the myths and study them in an attempt to throw light on the religious and political institutions of Ancient Greece and its civilization, and to gain understanding of the nature of myth-making itself. Greek religion encompassed the collection of beliefs and rituals practiced in ancient Greece in the form of both popular public religion and cult practices. These different groups varied enough for it to be possible to speak of Greek religions or "cults" in the plural, though most of them shared similarities. Greek religion also extended beyond mainland Greece to neighbouring islands.
Many Greek people recognized the major gods and goddesses: Zeus, Poseidon, Hades, Apollo, Artemis, Aphrodite, Ares, Dionysus, Hephaestus, Athena, Hermes, Demeter, Hestia and Hera; though philosophies such as Stoicism and some forms of Platonism used language that seems to posit a transcendent single deity. Different cities often worshipped the same deities, sometimes with epithets that distinguished them and specified their local nature. The earliest surviving philosophy from ancient Greece dates back to the 6th century BC, when according to Aristotle Thales of Miletus was considered to have been the first Greek philosopher. Other influential pre-Socratic philosophers include Pythagoras and Heraclitus. The most famous and significant figures in classical Athenian philosophy, from the 5th to the 3rd centuries BC, are Socrates, his student Plato, and Aristotle, who studied at Plato's Academy before founding his own school, known as the Lyceum. Later Greek schools of philosophy, including the Cynics, Stoics, and Epicureans, continued to be influential after the Roman annexation of Greece, and into the post-Classical world. Greek philosophy dealt with a wide variety of subjects, including political philosophy, ethics, metaphysics, ontology, and logic, as well as disciplines which are not today thought of as part of philosophy, such as biology and rhetoric. The language of ancient Rome was Latin, a member of the Italic family of languages. The earliest surviving inscription in Latin comes from the 7th century BC, on a brooch from Palestrina. Latin from between this point and the early 1st century BC is known as Old Latin. Most surviving Latin literature is Classical Latin, from the 1st century BC to the 2nd century AD. Latin then evolved into Late Latin, in use during the late antique period. Late Latin survived long after the end of classical antiquity, and was finally replaced by written Romance languages around the 9th century AD. 
Along with literary forms of Latin, there existed various vernacular dialects, generally known as Vulgar Latin, in use throughout antiquity. These are mainly preserved in sources such as graffiti and the Vindolanda tablets. The earliest surviving Latin authors, writing in Old Latin, include the playwrights Plautus and Terence. Much of the best known and most highly thought of Latin literature comes from the classical period, with poets such as Virgil, Horace, and Ovid; historians such as Julius Caesar and Tacitus; orators such as Cicero; and philosophers such as Seneca the Younger and Lucretius. Late Latin authors include many Christian writers such as Lactantius, Tertullian and Ambrose; non-Christian authors, such as the historian Ammianus Marcellinus, are also preserved. According to legend, the city of Rome was founded in 753 BC; in reality, there had been a settlement on the site since around 1000 BC, when the Palatine Hill was settled. The city was originally ruled by kings, first Roman, and then Etruscan – according to Roman tradition, the first Etruscan king of Rome, Tarquinius Priscus, ruled from 616 BC. Over the course of the 6th century BC, the city expanded its influence over the entirety of Latium. Around the end of the 6th century – traditionally in 510 BC – the kings of Rome were driven out, and the city became a republic. Around 387 BC, Rome was sacked by the Gauls following the Battle of the Allia. It soon recovered from this humiliating defeat, however, and in 381 the inhabitants of Tusculum in Latium were made Roman citizens. This was the first time Roman citizenship was extended in this way. Rome went on to expand its area of influence, until by 269 the entirety of the Italian peninsula was under Roman rule. Soon afterwards, in 264, the First Punic War began; it lasted until 241. The Second Punic War began in 218, and by the end of that year, the Carthaginian general Hannibal had invaded Italy. 
The war saw Rome's worst defeat to that point at Cannae; the largest army Rome had yet put into the field was wiped out, and one of the two consuls leading it was killed. However, Rome continued to fight, annexing much of Spain and eventually defeating Carthage, ending her position as a major power and securing Roman preeminence in the Western Mediterranean. The classical languages of the Ancient Mediterranean world influenced every European language, imparting to each a learned vocabulary of international application. Thus, Latin grew from a highly developed cultural product of the Golden and Silver eras of Latin literature to become the "international lingua franca" in matters diplomatic, scientific, philosophic and religious, until the 17th century. Long before this, Latin had evolved into the Romance languages and Ancient Greek into Modern Greek and its dialects. In the specialised science and technology vocabularies, the influence of Latin and Greek is notable. Ecclesiastical Latin, the Roman Catholic Church's official language, remains a living legacy of the classical world in the contemporary world. Latin had an impact far beyond the classical world. It continued to be the pre-eminent language for serious writings in Europe long after the fall of the Roman empire. The modern Romance languages – such as French, Spanish, and Italian – all derive from Latin. Latin is still seen as a foundational aspect of European culture. The legacy of the classical world is not confined to the influence of classical languages. The Roman empire was taken as a model by later European empires, such as the Spanish and British empires. Classical art has been taken as a model in later periods – medieval Romanesque architecture and Enlightenment-era neoclassical literature were both influenced by classical models, to take but two examples, while Joyce's "Ulysses" is one of the most influential works of twentieth century literature.
Chemistry Chemistry is the scientific discipline involved with elements and compounds composed of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during a reaction with other substances. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant chemistry (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (astrophysics), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Chemistry addresses topics such as how atoms and molecules interact via chemical bonds to form new chemical compounds. There are four types of chemical bonds: covalent bonds, in which atoms share one or more electrons; ionic bonds, in which one atom donates one or more electrons to another to produce ions (cations and anions); hydrogen bonds; and van der Waals forces. The word "chemistry" comes from "alchemy," which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism and medicine. It is often seen as linked to the quest to turn lead or another common starting material into gold, though in ancient times the study encompassed many of the questions of modern chemistry; the early 4th-century Greek-Egyptian alchemist Zosimos defined it as the study of "the composition of waters, movement, growth, embodying, disembodying, drawing the spirits from bodies and bonding the spirits within bodies".
An alchemist was called a 'chemist' in popular speech, and later the suffix "-ry" was added to this to describe the art of the chemist as "chemistry". The modern word "alchemy" in turn is derived from the Arabic word "al-kīmīā" (الكیمیاء). In origin, the term is borrowed from the Greek χημία or χημεία. This may have Egyptian origins since "al-kīmīā" is derived from the Greek χημία, which is in turn derived from the word Kemet, which is the ancient name of Egypt in the Egyptian language. Alternately, "al-kīmīā" may derive from χημεία, meaning "cast together". The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) 
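The atom-conservation rule for chemical equations described above can be checked mechanically. The following sketch (hypothetical helper names; simple formulas only, with no parentheses, hydrates, or charges) counts atoms of each element on both sides of an equation:

```python
import re
from collections import Counter

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula like 'H2O' or 'CO2' (no parentheses)."""
    atoms = Counter()
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coefficient * (int(count) if count else 1)
    return atoms

def is_balanced(equation):
    """Check that each element appears equally on both sides of 'A + B -> C + D'."""
    def side_atoms(side):
        total = Counter()
        for term in side.split("+"):
            match = re.match(r"(\d*)\s*(.+)", term.strip())
            coeff = int(match.group(1)) if match.group(1) else 1
            total += count_atoms(match.group(2), coeff)
        return total
    left, right = equation.split("->")
    return side_atoms(left) == side_atoms(right)

# Methane combustion, properly balanced:
print(is_balanced("CH4 + 2 O2 -> CO2 + 2 H2O"))  # True
# Missing coefficients: atoms are not conserved:
print(is_balanced("CH4 + O2 -> CO2 + H2O"))      # False
```

An equation that fails this check either needs different stoichiometric coefficients or, as the text notes, describes a nuclear rather than a chemical transformation.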
The type of chemical reactions a substance may undergo and the energy changes that may accompany them are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry. In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well, though not all particles have rest mass; the photon, for example, does not. Matter can be a pure chemical substance or a mixture of substances. The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus. The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).
A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol "Z". The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends. A "compound" is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number. A "molecule" is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. 
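The relationship between atomic number, mass number, and isotopes described above amounts to simple arithmetic: the neutron count of a nuclide is its mass number minus its atomic number. A minimal sketch (function name is illustrative):

```python
def neutron_count(mass_number, atomic_number):
    """Neutrons in a nuclide: mass number (protons + neutrons) minus atomic number (protons)."""
    return mass_number - atomic_number

# The two carbon isotopes mentioned above (Z = 6):
print(neutron_count(12, 6))  # carbon-12 has 6 neutrons
print(neutron_count(13, 6))  # carbon-13 has 7 neutrons
```

Both nuclides are carbon because they share Z = 6; only the neutron count, and hence the mass number, differs.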
However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable. The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules "per se".
Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), network solids such as diamond and graphite (both forms of carbon), metals, and familiar silica and silicate minerals such as quartz and granite. One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, composed of more than six atoms (of several elements), can be crucial to their chemical nature. A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys. The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). The mole is defined as the number of atoms found in exactly 0.012 kilogram (or 12 grams) of carbon-12, where the carbon-12 atoms are unbound, at rest and in their ground state. The number of entities per mole is known as the Avogadro constant, and is determined empirically to be approximately 6.022 × 10^23 mol−1. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3. In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A "phase" is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase.
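The mole and molar concentration lend themselves to a short worked example. The sketch below (hypothetical helper names; the NaCl molar mass is an approximate illustrative value) computes amount of substance from mass and then the molarity of a solution:

```python
AVOGADRO = 6.02214076e23  # entities per mole; fixed exactly in the 2019 SI revision

def moles_from_mass(mass_g, molar_mass_g_per_mol):
    """Amount of substance n = m / M."""
    return mass_g / molar_mass_g_per_mol

def molarity(moles, volume_dm3):
    """Molar concentration c = n / V, reported in mol/dm3."""
    return moles / volume_dm3

# 58.44 g of NaCl (molar mass ~58.44 g/mol) dissolved to 0.5 dm3 of solution:
n = moles_from_mass(58.44, 58.44)   # 1.0 mol
print(molarity(n, 0.5))             # 2.0 mol/dm3
print(n * AVOGADRO)                 # ~6.022e23 formula units of NaCl
```

Note that for an ionic solid like NaCl one counts formula units rather than molecules, in line with the distinction drawn above.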
The phase of matter is defined by the "phase transition", which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet under the same conditions, this is known as a triple point, and since a triple point is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the "aqueous" phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology. Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. A chemical bond can be a covalent bond, an ionic bond, a hydrogen bond, or simply the result of van der Waals forces. Each of these kinds of bonds is ascribed to some potential.
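The familiar solid/liquid/gas classification at a fixed pressure can be caricatured in a few lines. This is a deliberate toy (function name and interface are assumptions): it ignores supercritical states, plasmas, multiple solid phases, and everything else the paragraph above lists beyond the three common phases:

```python
def phase_at_fixed_pressure(temp_c, melting_c, boiling_c):
    """Classify solid/liquid/gas from temperature alone, given a substance's
    melting and boiling points at one fixed pressure. A simplification:
    real phase behavior depends on pressure and admits many more phases."""
    if temp_c < melting_c:
        return "solid"
    if temp_c < boiling_c:
        return "liquid"
    return "gas"

# Water at standard pressure (melting point 0 C, boiling point 100 C):
print(phase_at_fixed_pressure(-10, 0, 100))  # solid
print(phase_at_fixed_pressure(25, 0, 100))   # liquid
print(phase_at_fixed_pressure(120, 0, 100))  # gas
```

A fuller model would take pressure as a second input and consult a phase diagram, which is exactly what the triple point and supercritical state above describe.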
These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition. An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed. In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the "duet rule", and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. See diagram on electronic orbitals. 
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are generally not possible unless the reactants surmount an energy barrier known as the activation energy. The "speed" of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG < 0; if ΔG is equal to zero, the chemical reaction is said to be at equilibrium. 
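The exponential temperature dependence described above can be made concrete with a short numerical sketch. The activation energy and temperatures below are illustrative values, not data; the molar form e^(−E/(RT)) is used, which is equivalent to the per-molecule Boltzmann factor.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def boltzmann_factor(activation_energy, temperature):
    """exp(-E/(R*T)): fraction of molecules with energy >= E at temperature T."""
    return math.exp(-activation_energy / (R * temperature))

E = 50_000.0  # illustrative activation energy, J/mol
f_300 = boltzmann_factor(E, 300.0)
f_310 = boltzmann_factor(E, 310.0)

# For a typical activation energy, a modest 10 K rise roughly doubles the
# factor -- the familiar rule of thumb that many reaction rates double
# for every ~10 degree increase in temperature.
print(f"{f_310 / f_300:.2f}")  # about 1.9
```

The factor itself is tiny at either temperature; what matters for the reaction rate is how quickly it grows as T increases.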
Only a limited number of possible energy states exist for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of the energy of a bound system. Atoms or molecules in a higher energy state are said to be excited, and the molecules or atoms of a substance in an excited energy state are often much more reactive, that is, more amenable to chemical reactions. The phase of a substance is determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase such as a liquid or solid, as is the case with water (H2O), a liquid at room temperature because its molecules are bound by hydrogen bonds. Hydrogen sulfide (H2S), by contrast, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions. The transfer of energy from one chemical substance to another depends on the "size" of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than the photons invoked for electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances than light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. 
Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A "chemical reaction" is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels, often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, in molecules breaking apart to form two or more smaller molecules, or in the rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid-base neutralization and molecular rearrangement are some of the commonly used kinds of chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles, viz. protons and neutrons. The sequence of steps in which the reorganization of chemical bonds takes place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. 
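The conservation property of a non-nuclear chemical equation noted above can be checked mechanically. The sketch below verifies that 2 H2 + O2 → 2 H2O is balanced; formulas are given as pre-parsed atom counts to keep the example short.

```python
from collections import Counter

# Formulas as atom counts (pre-parsed to keep the sketch short).
H2 = {"H": 2}
O2 = {"O": 2}
H2O = {"H": 2, "O": 1}

def side_total(species):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in species:
        for atom, n in formula.items():
            total[atom] += coeff * n
    return total

lhs = side_total([(2, H2), (1, O2)])   # 2 H2 + O2
rhs = side_total([(2, H2O)])           # 2 H2O
print(lhs == rhs)  # True: 4 H and 2 O on each side
```

The same check fails for a nuclear reaction written in terms of elements, since there only the total numbers of protons and neutrons are conserved, not the identity of the atoms.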
Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules, often come in handy while proposing a mechanism for a chemical reaction. According to the IUPAC Gold Book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events'). An "ion" is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid-base reactions are hydroxide (OH−) and phosphate (PO43−). Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. A substance can often be classified as an acid or a base. There are several different theories which explain acid-base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. 
According to Brønsted–Lowry acid-base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid-base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. According to this theory, the crucial things being exchanged are charges. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values. Redox ("red"uction-"ox"idation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. 
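The logarithmic pH measurement described above is straightforward to compute. A minimal sketch, with illustrative concentrations:

```python
import math

def pH(hydronium_molarity):
    """pH: negative base-10 logarithm of the hydronium ion concentration."""
    return -math.log10(hydronium_molarity)

print(pH(1e-7))  # neutral water: 7.0
print(pH(1e-3))  # a more acidic solution: 3.0 (lower pH, more hydronium)
```

Because the scale is logarithmic, the second solution has ten thousand times the hydronium concentration of the first, yet its pH differs by only four units.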
Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number; the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. Although the concept of equilibrium is widely used across the sciences, in the context of chemistry it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another, thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which parameters such as chemical composition remain unchanged over time. Chemical reactions are governed by certain laws, such as the law of conservation of mass, which have become fundamental concepts in chemistry. The history of chemistry spans a period from very old times to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. Chemistry was preceded by its protoscience, alchemy, which is an intuitive but non-scientific approach to understanding the constituents of matter and their interactions. 
It was unsuccessful in explaining the nature of matter and its transformations, but, by performing experiments and recording the results, alchemists set the stage for modern chemistry. Chemistry as a body of knowledge distinct from alchemy began to emerge when a clear differentiation was made between them by Robert Boyle in his work "The Sceptical Chymist" (1661). While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry is considered to have become an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection. The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. 
This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances – a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes. Early civilizations, such as the Egyptians, Babylonians and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle, stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BC, the Roman philosopher Lucretius expanded upon the theory in his book "De rerum natura" (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. 
Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations. The development of the modern scientific method was slow and arduous, but an early scientific method for chemistry began emerging among early Muslim chemists, beginning with the 9th century Perso-Arab chemist Jābir ibn Hayyān (known as "Geber" in Europe), who is sometimes referred to as "the father of chemistry". He introduced a systematic and experimental approach to scientific research based in the laboratory, in contrast to the ancient Greek and Egyptian alchemists whose works were largely allegorical and often unintelligible. He also introduced the alembic (al-anbiq) of Persian encyclopedist Ibn al-Awwam to Europe, chemically analyzed many chemical substances, composed lapidaries, distinguished between alkalis and acids, and manufactured hundreds of drugs. His books strongly influenced the medieval European alchemists and justified their search for the philosopher's stone. In the Middle Ages, Jabir's treatises on alchemy were translated into Latin and became standard texts for European alchemists. These include the "Kitab al-Kimya" (titled "Book of the Composition of Alchemy" in Europe), translated by Robert of Chester (1144); and the "Kitab al-Sab'een" ("Book of Seventy") by Gerard of Cremona (before 1187). Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī, Avicenna and Al-Kindi disputed the theories of alchemy, particularly the theory of the transmutation of metals; and al-Tusi described a version of the conservation of mass, noting that a body of matter is able to change but is not able to disappear. 
Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford (Robert Boyle, Robert Hooke and John Mayow) began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular is regarded as the founding father of chemistry due to his most important work, the classic chemistry text "The Sceptical Chymist", in which the differentiation is made between the claims of alchemy and the empirical scientific discoveries of the new chemistry. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics, who did more than any other to establish the new science on a proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. Before his work, though, many important discoveries had been made, specifically relating to the nature of 'air', which was discovered to be composed of many different gases. The Scottish chemist Joseph Black (the first experimental chemist) and the Dutchman J.B. van Helmont discovered carbon dioxide, or what Black called 'fixed air', in 1754; Henry Cavendish discovered hydrogen and elucidated its properties; and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights. 
The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, J.J. Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current. The British chemist William Prout first proposed ordering all the elements by their atomic weight, as he believed all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table. At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of Cambridge University discovered the electron, and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr and Henry Moseley. 
The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea, which proved that living organisms were, in theory, reducible to chemistry. Other crucial 19th century advances were: an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s). The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry and of the United Nations Educational, Scientific, and Cultural Organization, and involved chemical societies, academics, and institutions worldwide, relying on individual initiatives to organize local and regional activities. Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. Disciplines within chemistry are traditionally grouped by the type of matter being studied or the kind of study. These include inorganic chemistry, the study of inorganic matter; organic chemistry, the study of organic (carbon-based) matter; biochemistry, the study of substances found in biological organisms; physical chemistry, the study of chemical processes using physical concepts such as thermodynamics and quantum mechanics; and analytical chemistry, the analysis of material samples to gain an understanding of their chemical composition and structure. Many more specialized disciplines have emerged in recent years, e.g. neurochemistry, the chemical study of the nervous system (see subdisciplines). 
Other fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, electrochemistry, environmental chemistry, femtochemistry, flavor chemistry, flow chemistry, geochemistry, green chemistry, histochemistry, history of chemistry, hydrogenation chemistry, immunochemistry, marine chemistry, materials science, mathematical chemistry, mechanochemistry, medicinal chemistry, molecular biology, molecular mechanics, nanotechnology, natural product chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, physical organic chemistry, phytochemistry, polymer chemistry, radiochemistry, solid-state chemistry, sonochemistry, supramolecular chemistry, surface chemistry, synthetic chemistry, thermochemistry, and many others. The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%.
https://en.wikipedia.org/wiki?curid=5180
Cytoplasm In cell biology, the cytoplasm is all of the material within a cell, enclosed by the cell membrane, except for the cell nucleus. The material inside the nucleus and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol – a gel-like substance, the organelles – the cell's internal sub-structures, and various cytoplasmic inclusions. The cytoplasm is about 80% water and usually colorless. The submicroscopic ground cell substance, or cytoplasmic matrix, which remains after exclusion of the cell organelles and particles is the groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, the plant plastids, lipid droplets, and vacuoles. Most cellular activities take place within the cytoplasm, such as many metabolic pathways including glycolysis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex or the ectoplasm. Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes. In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming. The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but it has since come to mean the cell substance and organelles outside the nucleus. There has been some disagreement on the definition of cytoplasm, as some authors prefer to exclude from it some organelles, especially the vacuoles and sometimes the plastids. The physical properties of the cytoplasm have been contested in recent years. It remains uncertain how the varied components of the cytoplasm interact to allow movement of particles and organelles while maintaining the cell's structure. 
The flow of cytoplasmic components plays an important role in many cellular functions which are dependent on the permeability of the cytoplasm. An example of such function is cell signalling, a process which is dependent on the manner in which signaling molecules are allowed to diffuse across the cell. While small signaling molecules like calcium ions are able to diffuse with ease, larger molecules and subcellular structures often require aid in moving through the cytoplasm. The irregular dynamics of such particles have given rise to various theories on the nature of the cytoplasm. There has long been evidence that the cytoplasm behaves like a sol-gel. It is thought that the component molecules and structures of the cytoplasm behave at times like a disordered colloidal solution (sol) and at other times like an integrated network, forming a solid mass (gel). This theory thus proposes that the cytoplasm exists in distinct fluid and solid phases depending on the level of interaction between cytoplasmic components, which may explain the differential dynamics of different particles observed moving through the cytoplasm. Recently it has been proposed that the cytoplasm behaves like a glass-forming liquid approaching the glass transition. In this theory, the greater the concentration of cytoplasmic components, the less the cytoplasm behaves like a liquid and the more it behaves as a solid glass, freezing larger cytoplasmic components in place (it is thought that the cell's metabolic activity is able to fluidize the cytoplasm to allow the movement of such larger cytoplasmic components). A cell's ability to vitrify in the absence of metabolic activity, as in dormant periods, may be beneficial as a defence strategy. A solid glass cytoplasm would freeze subcellular structures in place, preventing damage, while allowing the transmission of very small proteins and metabolites, helping to kickstart growth upon the cell's revival from dormancy. 
There has been research examining the motion of cytoplasmic particles independent of the nature of the cytoplasm. In such an alternative approach, the aggregate random forces within the cell caused by motor proteins explain the non-Brownian motion of cytoplasmic constituents. The three major elements of the cytoplasm are the cytosol, organelles and inclusions. The cytosol is the portion of the cytoplasm not contained within membrane-bound organelles. Cytosol makes up about 70% of the cell volume and is a complex mixture of cytoskeleton filaments, dissolved molecules, and water. The cytosol's filaments include the protein filaments such as actin filaments and microtubules that make up the cytoskeleton, as well as soluble proteins and small structures such as ribosomes, proteasomes, and the mysterious vault complexes. The inner, granular and more fluid portion of the cytoplasm is referred to as endoplasm. Due to this network of fibres and high concentrations of dissolved macromolecules, such as proteins, an effect called macromolecular crowding occurs and the cytosol does not act as an ideal solution. This crowding effect alters how the components of the cytosol interact with each other. Organelles (literally "little organs"), are usually membrane-bound structures inside the cell that have specific functions. Some major organelles that are suspended in the cytosol are the mitochondria, the endoplasmic reticulum, the Golgi apparatus, vacuoles, lysosomes, and in plant cells, chloroplasts. The inclusions are small particles of insoluble substances suspended in the cytosol. A huge range of inclusions exist in different cell types, and range from crystals of calcium oxalate or silicon dioxide in plants, to granules of energy-storage materials such as starch, glycogen, or polyhydroxybutyrate. 
A particularly widespread example is lipid droplets, which are spherical droplets composed of lipids and proteins that are used in both prokaryotes and eukaryotes as a way of storing lipids such as fatty acids and sterols. Lipid droplets make up much of the volume of adipocytes, which are specialized lipid-storage cells, but they are also found in a range of other cell types. The cytoplasm, mitochondria and most organelles are contributions to the cell from the maternal gamete. Contrary to older views that treated the cytoplasm as passive, new research has shown it to control the movement and flow of nutrients into and out of the cell through viscoplastic behavior and a measure of the reciprocal rate of bond breakage within the cytoplasmic network. The material properties of the cytoplasm remain under ongoing investigation. Recent measurements using force spectrum microscopy reveal that the cytoplasm can be likened to an elastic solid, rather than a viscoelastic fluid.
https://en.wikipedia.org/wiki?curid=5184
Christ (title) The concept of the Christ in Christianity originated from the concept of the messiah in Judaism. Christians believe that Jesus is the messiah foretold in the Hebrew Bible and the Christian Old Testament. Although the conceptions of the messiah in each religion are similar, for the most part they are distinct from one another due to the split of early Christianity and Judaism in the 1st century. "Christ", used by Christians as both a name and a title, is synonymous with Jesus. It is used as a title in the reciprocal form "Christ Jesus", meaning "the Messiah Jesus", and independently as "the Christ". The Pauline epistles, the earliest texts of the New Testament, often refer to Jesus as "Christ Jesus" or "Christ". Although the original followers of Jesus believed Jesus to be the Jewish messiah, e.g. in the Confession of Peter, Jesus was usually referred to as "Jesus of Nazareth" or "Jesus, son of Joseph". Jesus came to be called "Jesus Christ" (meaning "Jesus the "Khristós"", i.e. "Jesus the Messiah" or "Jesus the Anointed") by later Christians, who believe that his crucifixion and resurrection fulfill the messianic prophecies of the Old Testament. Christ comes from the Greek word Χριστός ("chrīstós"), meaning "anointed one". The word is derived from the Greek verb χρίω ("chrī́ō"), meaning "to anoint". In the Greek Septuagint, "christos" was used to translate the Hebrew מָשִׁיחַ ("Mašíaḥ," messiah), meaning "[one who is] anointed". In the Old Testament, anointing was reserved for the Kings of Israel, for the High Priest of Israel (Exodus 29:7, Leviticus 4:3–16), and for the prophets (1 Kings 19:16). According to the "Summa Theologica" of Thomas Aquinas, in the singular case of Jesus, the word "Christ" has a twofold meaning, which stands for "both the Godhead anointing and the manhood anointed". 
It derives from the twofold human-divine nature of Christ (dyophysitism): the Son of man is anointed in consequence of his incarnate flesh, just as the Son of God is anointing in consequence of the "Godhead which He has with the Father" (ST "III", q. 16, a. 5). The word "Christ" (and similar spellings) appears in English and in most European languages. English-speakers now often use "Christ" as if it were a name, one part of the name "Jesus Christ", though it was originally a title ("the Messiah"). Its usage in "Christ Jesus" emphasizes its nature as a title. Compare the usage "the Christ". The spelling "Christ" in English became standardized in the 18th century, when, in the spirit of the Enlightenment, the spelling of certain words was changed to fit their Greek or Latin origins. Prior to this, scribes writing in Old and Middle English usually used the spelling "Crist", the "i" being pronounced either as a long vowel, preserved in the names of churches such as St Katherine Cree, or as a short vowel, preserved in the modern pronunciation of "Christmas". The spelling "Christ" in English is attested from the 14th century. In modern and ancient usage, even in secular terminology, "Christ" usually refers to Jesus, based on the centuries-old tradition of such usage. Since the Apostolic Age, the [...] use of the definite article before the word Christ and its gradual development into a proper name show that the Christians identified the bearer with the promised Messias of the Jews. In the Ancient Greek text of the deuterocanonical books, the term "Christ" (Χριστός, translit. Christós) is found in 2 Maccabees 1:10 (referring to the anointed High Priest of Israel) and in the Book of Sirach 46:19, in relation to Samuel, prophet and institutor of the kingdom under Saul. At the time of Jesus, there was no single form of Second Temple Judaism, and there were significant political, social, and religious differences among the various Jewish groups.
However, for centuries the Jews had used the term "moshiach" ("anointed") to refer to their expected deliverer. The New Testament states that the long-awaited messiah had come and describes this savior as "the Christ". The apostle Peter said, in what has become a famous proclamation of faith among Christians since the first century, "You are the Christ, the Son of the living God." The Gospel of Mark opens with "The beginning of the gospel of Jesus Christ, the Son of God", identifying Jesus as both Christ and the Son of God. Matthew uses Christ as a name and explains it again with: "Jesus, who is called Christ". The use of the definite article before the word "Christ" and its gradual development into a proper name show that the Christians identified Jesus with the promised messiah of the Jews, who fulfilled all the messianic predictions in a fuller and higher sense than the rabbis had given them. The Gospels of Mark and Matthew begin by calling Jesus both Christ and the Son of God, but these are two distinct attributions. They develop in the New Testament along separate paths and have distinct theological implications. At the time in Roman Judaea, the Jews had been awaiting the "Messiah", and many people were wondering who it would be. When John the Baptist appeared and began preaching, he attracted disciples who assumed that he would be announced as the messiah, or "the one" that they had been awaiting. Martha told Jesus, "you are the Christ, the Son of God, who is coming into the world", signifying that both titles were generally accepted (yet considered distinct) among the followers of Jesus before the raising of Lazarus. In the trial of Jesus before the Sanhedrin and Pontius Pilate, it might appear from the narratives of Matthew and Luke that Jesus at first refused a direct reply to the high priest's question: "Art thou the Christ?", where his answer is given merely as "su eipas" ("thou hast said it").
The Gospel of Mark, however, states the answer as "ego eimi" ("I am"), and there are instances from Jewish literature in which the expression "thou hast said it" is equivalent to "you are right". The Messianic claim was less significant than the claim to divinity, which caused the high priest's horrified accusation of blasphemy and the subsequent call for the death sentence. Before Pilate, on the other hand, it was merely the assertion of his royal dignity which gave grounds for his condemnation. The word "Christ" is closely associated with Jesus in the Pauline epistles, which suggests that there was no need for the early Christians to claim that Jesus is Christ because it was considered widely accepted among them. Hence Paul can use the term "Khristós" with no confusion as to whom it refers, and he can use expressions such as "in Christ" to refer to the followers of Jesus. Paul proclaimed him as the Last Adam, who restored through obedience what Adam lost through disobedience. The Pauline epistles are a source of some key Christological connections; for example, they relate the love of Christ to the knowledge of Christ, and consider the love of Christ a necessity for knowing him. There are also implicit claims to him being the Christ in the words and actions of Jesus. Episodes in the life of Jesus and statements about what he accomplished during his public ministry are found throughout the New Testament. Christology, literally "the understanding of Christ", is the study of the nature (person) and work (role in salvation) of Jesus in Christianity. It studies Jesus Christ's humanity and divinity, the relation between these two aspects, and the role he plays in salvation. The earliest Christian writings gave several titles to Jesus, such as Son of Man, Son of God, Messiah, and Kyrios, which were all derived from the Hebrew scriptures.
These terms centered around two themes, namely "Jesus as a preexistent figure who becomes human and then returns to God" and "Jesus as a creature elected and 'adopted' by God." From the second to the fifth centuries, the relation of the human and divine nature of Christ was a major focus of debates in the early church and at the first seven ecumenical councils. The Council of Chalcedon in 451 issued a formulation of the hypostatic union of the two natures of Christ, one human and one divine, "united with neither confusion nor division". Most of the major branches of Western Christianity and Eastern Orthodoxy subscribe to this formulation, while many branches of the Oriental Orthodox Churches reject it, subscribing to miaphysitism. The use of "Χ" (the Greek letter chi, the initial of Χριστός) as an abbreviation for Christ, most commonly in the abbreviation "Χmas", is often misinterpreted as a modern secularization of the term. Thus understood, the centuries-old English word "Χmas" is actually a shortened form of "CHmas", itself a shortened form of "Christmas". Christians are sometimes referred to as "Xians", with the "X" replacing "Christ". A very early Christogram is the Chi Rho symbol, formed by superimposing the first two Greek letters in Christ (chi = ch and rho = r) to produce ☧.
https://en.wikipedia.org/wiki?curid=5185
Central Europe Central Europe is the region comprising the central part of Europe. It occupies continuous territories that are otherwise sometimes considered parts of Western Europe, Southern Europe and Eastern Europe. The concept of Central Europe is based on a common historical, social and cultural identity: linguistically, Central Europe includes lands where various dialects of German have been spoken as the first language, as well as countries where German historically was, and to some extent still is, the most important lingua franca. In terms of religion, Central Europe is a patchwork of traditionally Catholic and Protestant territories, as well as the cradle of Protestantism. The struggle between Catholicism and Protestantism was a significant shaping process in the history of Central Europe, where neither side was able to prevail in the region as a whole. Historically, Central Europe comprised most of the territories of the Holy Roman Empire, as well as the territories of the two adjacent kingdoms to the east (Poland and Hungary). Hungary and parts of Poland were later parts of the Habsburg Monarchy, which was also a significant shaping force in the region's history. Unlike their Western European counterparts, Central European states barely had any overseas colonies, due to their central location and other factors; this fact is often cited as one of the causes of World War I. After World War II, Central Europe was divided by the Iron Curtain into parts belonging to the West and parts belonging to the Eastern Bloc. The Berlin Wall was the most visible symbol of this division. Central Europe is going through a "strategic awakening", with initiatives such as the Central European Initiative (CEI), Centrope, and the Visegrád Group. While the region's economies show considerable disparities of income, all Central European countries are listed by the Human Development Index as very highly developed.
Elements of cultural unity for Northwestern, Southwestern and Central Europe were Catholicism and Latin. Eastern Europe, by contrast, which remained Eastern Orthodox, was the area of Graeco-Byzantine cultural influence; after the East–West Schism (1054), Eastern Europe developed cultural unity and resistance to the Catholic (and later also Protestant) Western world within the framework of the Orthodox Church, the Church Slavonic language and the Cyrillic alphabet. According to the Hungarian historian Jenő Szűcs, the foundations of Central European history around the first millennium were in close connection with Western European development. He explained that between the 11th and 15th centuries not only were Christianization and its cultural consequences implemented, but well-defined social features emerged in Central Europe based on Western characteristics. The keyword of Western social development after the first millennium was the spread of liberties and autonomies in Western Europe. These phenomena appeared in the middle of the 13th century in Central European countries; there were self-governments of towns, counties and parliaments. In 1335, under the rule of King Charles I of Hungary, the castle of Visegrád, the seat of the Hungarian monarchs, was the scene of a royal summit of the Kings of Poland, Bohemia and Hungary. They agreed to cooperate closely in the fields of politics and commerce, inspiring their post-Cold War successors to launch a successful Central European initiative. In the Middle Ages, countries in Central Europe adopted Magdeburg rights. Before 1870, the industrialization that had started to develop in Northwestern and Central Europe and the United States did not extend in any significant way to the rest of the world. Even in Eastern Europe, industrialization lagged far behind. Russia, for example, remained largely rural and agricultural, and its autocratic rulers kept the peasants in serfdom.
The concept of Central Europe was already known at the beginning of the 19th century, but it only came to life in the 20th century, when it immediately became an object of intensive interest. The very first concept, however, mixed science, politics and economics: it was strictly connected with the rapidly growing German economy and its aspirations to dominate a part of the European continent called "Mitteleuropa". The German term denoting Central Europe was so fashionable that other languages started referring to it when indicating territories from the Rhine to the Vistula, or even the Dnieper, and from the Baltic Sea to the Balkans. An example of that era's vision of Central Europe may be seen in J. Partsch's book of 1903. On 21 January 1904, the "Mitteleuropäischer Wirtschaftsverein" (Central European Economic Association) was established in Berlin with the economic integration of Germany and Austria–Hungary (with eventual extension to Switzerland, Belgium and the Netherlands) as its main aim. Later, the term Central Europe became connected to German plans of political, economic and cultural domination. The "bible" of the concept was Friedrich Naumann's book "Mitteleuropa", in which he called for an economic federation to be established after World War I. Naumann's idea was that the federation would have Germany and the Austro-Hungarian Empire at its centre but would also include all European nations outside the Triple Entente. The concept failed after the German defeat in World War I and the dissolution of Austria-Hungary, though a revival of the idea may be observed during the Hitler era. According to Emmanuel de Martonne, in 1927 the Central European countries included Austria, Czechoslovakia, Germany, Hungary, Poland, Romania and Switzerland. The author uses both human and physical geographical features to define Central Europe, but does not take into account the legal, social, cultural, economic or infrastructural development of these countries.
The interwar period (1918–1939) brought a new geopolitical system, as well as economic and political problems, and the concept of Central Europe took on a different character. The centre of interest moved to its eastern part: the countries that had (re)appeared on the map of Europe, namely Czechoslovakia, Hungary and Poland. Central Europe ceased to be the area of German aspiration to lead or dominate and became a territory of various integration movements aiming at resolving the political, economic and national problems of the "new" states, as a way to face German and Soviet pressures. However, the conflict of interests was too great, and neither the Little Entente nor the Intermarium ("Międzymorze") idea succeeded. The interwar period brought new elements to the concept of Central Europe. Before World War I, it embraced mainly the German states (Germany, Austria), with non-German territories being an area of intended German penetration and domination; German leadership was to be the natural result of economic dominance. After the war, the eastern part of Central Europe was placed at the centre of the concept. At that time scholars took an interest in the idea: the International Historical Congress in Brussels in 1923 was committed to Central Europe, and the 1933 Congress continued the discussions. The Hungarian historian Magda Ádám wrote in her study "Versailles System and Central Europe" (2006): "Today we know that the bane of Central Europe was the Little Entente, military alliance of Czechoslovakia, Romania and Kingdom of Serbs, Croats and Slovenes (later Yugoslavia), created in 1921 not for Central Europe's cooperation nor to fight German expansion, but in a wrongly perceived notion that a completely powerless Hungary must be kept down". The avant-garde movements of Central Europe were an essential part of modernism's evolution, reaching its peak throughout the continent during the 1920s.
The "Sourcebook of Central European Avant-Gardes" (Los Angeles County Museum of Art) contains primary documents of the avant-gardes in Austria, Czechoslovakia, Germany, Hungary, and Poland from 1910 to 1930. The manifestos and magazines of Western European radical art circles are well known to Western scholars and are taught at the leading universities of their kind in the Western world. "Mitteleuropa" may refer to a historical concept or to a contemporary German definition of Central Europe. As a historical concept, the German term "Mitteleuropa" (or, in its literal English translation, "Middle Europe") is ambiguous. It is sometimes used in English to refer to an area somewhat larger than most conceptions of "Central Europe"; it refers to territories under Germanic cultural hegemony until World War I (encompassing Austria–Hungary and Germany in their pre-war formations, but usually excluding the Baltic countries north of East Prussia). According to Fritz Fischer, "Mitteleuropa" was a scheme in the era of the Reich of 1871–1918 by which the old imperial elites had allegedly sought to build a system of German economic, military and political domination from the northern seas to the Near East and from the Low Countries through the steppes of Russia to the Caucasus. Later, Professor Fritz Epstein argued that the threat of a Slavic "Drang nach Westen" (Western expansion) had been a major factor in the emergence of a "Mitteleuropa" ideology before the Reich of 1871 ever came into being. In Germany the connotation was also sometimes linked to the pre-war German provinces east of the Oder–Neisse line. The term "Mitteleuropa" conjures up negative historical associations among some elderly people, although the Germans have not played an exclusively negative role in the region. Most Central European Jews embraced the enlightened German humanistic culture of the 19th century.
German-speaking Jews from turn-of-the-20th-century Vienna, Budapest and Prague became representatives of what many consider to be Central European culture at its best, though the Nazi version of "Mitteleuropa" destroyed this kind of culture. However, the term "Mitteleuropa" is now widely used again in German education and media without negative connotations, especially since the end of communism. In fact, many people from the new states of Germany do not identify themselves as being part of Western Europe and therefore prefer the term "Mitteleuropa". During World War II, Central Europe was largely occupied by Nazi Germany. Many areas became battlegrounds and were devastated. The mass murder of the Jews depopulated many of their centuries-old settlement areas, other peoples were settled there, and their culture was wiped out. Both Adolf Hitler and Joseph Stalin diametrically opposed the centuries-old Habsburg principles of "live and let live" with regard to ethnic groups, peoples, minorities, religions, cultures and languages, and tried to assert their own ideologies and power interests in Central Europe. There were various Allied plans for the post-war political order in Central Europe. While Stalin tried to bring as many states as possible under his control, Winston Churchill preferred a Central European Danube Confederation to set these countries against Germany and Russia. There were also plans to add Bavaria and Württemberg to an enlarged Austria, and various resistance movements around Otto von Habsburg pursued this goal. The group around the Austrian priest Heinrich Maier also planned in this direction, and successfully helped the Allied war effort by, among other things, forwarding the locations of production sites and plans for V-2 rockets, Tiger tanks and aircraft to the USA. Otto von Habsburg also tried to detach Hungary from the grasp of Nazi Germany and the USSR. There were various considerations for preventing German power in Europe after the war.
Churchill's idea of reaching the area around Vienna and Budapest before the Russians via an operation from the Adriatic was not approved by the Western Allied chiefs of staff. As a result of the military situation at the end of the war, Stalin's plans prevailed and much of Central Europe came under Russian control. Following World War II, large parts of Europe that were culturally and historically Western became part of the Eastern Bloc. The Czech author Milan Kundera (an emigrant to France) thus wrote in 1984 about the "Tragedy of Central Europe" in the "New York Review of Books". The boundary between the two blocs was called the Iron Curtain. Consequently, the English term "Central Europe" was increasingly applied only to the westernmost former Warsaw Pact countries (East Germany, Poland, Czechoslovakia, Hungary) to specify them as communist states that were culturally tied to Western Europe. This usage continued after the end of the Warsaw Pact, when these countries started to undergo transition. The post-World War II period brought a blocking of research on Central Europe in the Eastern Bloc countries, as its every result proved the dissimilarity of Central Europe, which was inconsistent with the Stalinist doctrine. On the other hand, the topic became popular in Western Europe and the United States, with much of the research being carried out by immigrants from Central Europe. At the end of communism, publicists and historians in Central Europe, especially the anti-communist opposition, returned to their research. According to Karl A. Sinnhuber ("Central Europe: Mitteleuropa: Europe Centrale: An Analysis of a Geographical Term"), most Central European states were unable to preserve their political independence and became Soviet satellite states. Besides Austria, only the marginal European states of Finland and Yugoslavia preserved their political sovereignty to a certain degree, being left out of any military alliances in Europe.
The opening of the Iron Curtain between Austria and Hungary at the Pan-European Picnic on August 19, 1989, then set in motion a peaceful chain reaction, at the end of which there was no longer an East Germany and the Eastern Bloc had disintegrated. It was the largest escape movement from East Germany since the Berlin Wall was built in 1961. After the picnic, which was based on an idea by Otto von Habsburg to test the reaction of the USSR and Mikhail Gorbachev to an opening of the border, tens of thousands of East Germans, informed by the media, set off for Hungary. The leadership of the GDR in East Berlin did not dare to completely block the borders of their own country, and the USSR did not respond at all. This broke the grip of the Eastern Bloc, and Central Europe subsequently became free from communism. According to the American professor Ronald Tiersky, the 1991 summit held in Visegrád, Hungary, attended by the Polish, Hungarian and Czechoslovak presidents, was hailed at the time as a major breakthrough in Central European cooperation, but the Visegrád Group became a vehicle for coordinating Central Europe's road to the European Union, while the development of closer ties within the region languished. The American professor Peter J. Katzenstein described Central Europe as a way station in a Europeanization process that marks the transformation of the Visegrád Group countries in different, though comparable, ways. According to him, in Germany's contemporary public discourse "Central European identity" refers to the civilizational divide between Catholicism and Eastern Orthodoxy. He says there is no precise, uncontestable way to decide whether the Baltic states, Serbia, Croatia, Slovenia, Romania, and Bulgaria are parts of Central Europe. Rather than a physical entity, Central Europe is a concept of shared history which contrasts with that of the surrounding regions. The issue of how to name and define the Central European region is subject to debate.
Very often, the definition depends on the nationality and historical perspective of its author. The main proposed regional definitions, gathered by the Polish historian Jerzy Kłoczowski, include: Former University of Vienna professor Lonnie R. Johnson points out criteria to distinguish Central Europe from Western, Eastern and Southeast Europe. He also holds that Central Europe is a dynamic historical concept, not a static spatial one. For example, Lithuania, a fair share of Belarus and western Ukraine are in Eastern Europe today, but centuries ago they were part of the Polish–Lithuanian Commonwealth. Johnson's study on Central Europe received acclaim and positive reviews in the scientific community. However, according to the Romanian researcher Maria Bucur, this very ambitious project suffers from the weaknesses imposed by its scope (almost 1,600 years of history). "The Columbia Encyclopedia" defines Central Europe as Germany, Switzerland, Liechtenstein, Austria, Poland, the Czech Republic, Slovakia, and Hungary. The World Factbook uses a similar definition and also adds Slovenia. Encarta Encyclopedia and Encyclopædia Britannica do not clearly define the region, but Encarta places the same countries in Central Europe in its individual articles on countries, adding Slovenia in "south central Europe". The German encyclopaedia "Meyers Grosses Taschenlexikon" ("Meyers Big Pocket Encyclopedia", 1999) defines Central Europe as the central part of Europe, with no precise borders to the east and west. The term is mostly used to denominate the territory between the Schelde and the Vistula and from the Danube to the Moravian Gate. Usually the countries considered to be Central European are Austria, Croatia, the Czech Republic, Germany, Hungary, Poland, Slovakia, Slovenia and Switzerland; in the broader sense Romania and Serbia too, and occasionally also Belgium, the Netherlands, and Luxembourg.
According to "Meyers Enzyklopädisches Lexikon", Central Europe is a part of Europe composed of Austria, Belgium, Czechoslovakia, Germany, Hungary, Luxembourg, the Netherlands, Poland, Romania and Switzerland, the northern marginal regions of Italy and Yugoslavia (the northern states Croatia, Serbia and Slovenia), as well as northeastern France. The German Standing Committee on Geographical Names, which develops and recommends rules for the uniform use of geographical names, proposes two sets of boundaries. The first follows the international borders of current countries. The second subdivides and includes some countries based on cultural criteria. In comparison to some other definitions, it is broader, including Luxembourg, Croatia, the Baltic states, and, in the second sense, parts of Russia, Belarus, Ukraine, Romania, Serbia, Italy, and France. There is no general agreement on what geographic area constitutes Central Europe, nor on how to subdivide it further geographically. At times, the term "Central Europe" denotes a geographic definition as the Danube region in the heart of the continent, including the language and culture areas which are today included in the states of Croatia, the Czech Republic, Hungary, Poland, Serbia, Slovakia and Slovenia, usually also Austria and Germany, but "never" Russia and other countries of the former Soviet Union towards the Ural mountains. The term EU11 countries refers to the Central, Eastern and Baltic European member states that acceded in 2004 and after: in 2004 the Czech Republic, Estonia, Latvia, Lithuania, Hungary, Poland, Slovenia, and the Slovak Republic; in 2007 Bulgaria and Romania; and in 2013 Croatia. The comprehension of the concept of "Central Europe" is an ongoing source of controversy, though the Visegrád Group constituents are almost always included as "de facto" Central European countries.
Although views on which countries belong to Central Europe vary widely, according to many sources (see section Definitions) the region includes the states listed in the sections below. Depending on context, Central European countries are sometimes grouped as Eastern or Western European countries, collectively or individually, and some sources place them in Eastern Europe instead: for instance, Austria can be referred to as Central European, as well as Eastern European or Western European. Some sources also add neighbouring countries for historical reasons (the former Austro-Hungarian and German Empires, and the modern Baltic states), or for geographical and/or cultural reasons. The Baltic states, geographically in Northern Europe, have been considered part of Central Europe in the German tradition of the term "Mitteleuropa". The Benelux countries are generally considered a part of Western Europe rather than Central Europe; nevertheless, they are occasionally mentioned in the Central European context due to cultural, historical and linguistic ties. The following states, or some of their regions, may sometimes be included in Central Europe: Geography defines Central Europe's natural borders with the neighbouring regions to the north across the Baltic Sea, namely Northern Europe (or Scandinavia), and to the south across the Alps, the Apennine peninsula (or Italy), and the Balkan peninsula across the Soča–Krka–Sava–Danube line. The borders to Western Europe and Eastern Europe are geographically less defined, and for this reason the cultural and historical boundaries migrate more easily west–east than south–north. The Rhine river, which runs south–north through western Germany, is an exception. Southwards, the Pannonian Plain is bounded by the rivers Sava and Danube and their respective floodplains.
The Pannonian Plain stretches over the following countries: Austria, Croatia, Hungary, Romania, Serbia, Slovakia and Slovenia, and touches the borders of Bosnia and Herzegovina (Republika Srpska) and Ukraine (the "peri-Pannonian states"). As the southeastern division of the Eastern Alps, the Dinaric Alps extend for 650 kilometres along the coast of the Adriatic Sea, from the Julian Alps in the northwest down to the Šar-Korab massif, where their direction changes from northwest–southeast to north–south. According to the Freie Universität Berlin, this mountain chain is classified as South Central European. The Central European flora region stretches from central France (the Massif Central) to central Romania (the Carpathians) and southern Scandinavia. Central Europe is one of the continent's most populous regions. It includes countries of varied sizes, ranging from tiny Liechtenstein to Germany, the largest European country by population that lies entirely within Europe. Demographic figures for the countries entirely located within Central Europe ("the core countries") total around 165 million people, of whom around 82 million are residents of Germany. Other populations include: Poland with around 38.5 million residents, the Czech Republic at 10.5 million, Hungary at 10 million, Austria with 8.8 million, Switzerland with 8.5 million, Slovakia at 5.4 million, and Liechtenstein at a bit less than 40,000. If the countries which are occasionally included in Central Europe were counted in, partially or in whole – Croatia (4.3 million), Slovenia (2 million, 2014 estimate), Romania (20 million), Lithuania (2.9 million), Latvia (2 million), Estonia (1.3 million), Serbia (7.1 million) – the total would rise by between 25 and 35 million, depending on whether a regional or integral approach is used.
If the smaller western and eastern historical parts of Central Europe were included in the demographic corpus, a further 20 million people of different nationalities would be added to the overall count, which would then surpass 200 million. Currently, the members of the Eurozone include Austria, Germany, Luxembourg, Slovakia, and Slovenia. Croatia, the Czech Republic, Hungary and Poland use their own currencies (the Croatian kuna, Czech koruna, Hungarian forint and Polish złoty), but are obliged to adopt the euro. Switzerland uses its own currency, the Swiss franc, as do Serbia (the Serbian dinar) and Romania (the Romanian leu). In 2018, Switzerland topped the HDI list among Central European countries, also ranking #2 in the world. Serbia rounded out the list at #11 (#67 in the world). The index of globalization in Central European countries (2016 data) was also led by Switzerland (#1 in the world). The Legatum Prosperity Index demonstrates an average to high level of prosperity in Central Europe (2018 data); Switzerland topped the index (#4 in the world). Most countries in Central Europe tend to score above the average in the Corruption Perceptions Index (2018 data), led by Switzerland, Germany, and Austria. Industrialisation occurred early in Central Europe, which spurred the construction of rail and other types of infrastructure. Central Europe contains the continent's earliest railway systems, whose greatest expansion was recorded in Austro-Hungarian and German territories between the 1860s and 1870s. By the mid-19th century, Berlin, Vienna, and Buda/Pest were focal points for network lines connecting the industrial areas of Saxony, Silesia, Bohemia, Moravia and Lower Austria with the Baltic (Kiel, Szczecin) and the Adriatic (Rijeka, Trieste). Rail infrastructure in Central Europe remains the densest in the world.
Railway density, measured as total length of lines operated (km) per 1,000 km2, is highest in the Czech Republic (198.6), followed by Poland (121.0), Slovenia (108.0), Germany (105.5), Hungary (98.7), Serbia (87.3), Slovakia (73.9) and Croatia (72.5) – high figures when compared with most of Europe and the rest of the world. Before the first railroads appeared in the 1840s, river transport constituted the main means of communication and trade. The earliest canals included the Plauen Canal (1745), the Finow Canal, and the Bega Canal (1710), which connected Timișoara to Novi Sad and Belgrade via the Danube. The most significant achievement in this regard was the facilitation of navigability on the Danube from the Black Sea to Ulm in the 19th century. Compared to most of Europe, the economies of Austria, Croatia, the Czech Republic, Germany, Hungary, Poland, Slovakia, Slovenia and Switzerland tend to demonstrate high complexity. Industrialisation reached Central Europe relatively early: Luxembourg and Germany by 1860; the Czech Republic, Poland, Slovakia and Switzerland by 1870; and Austria, Croatia, Hungary, Liechtenstein, Romania, Serbia and Slovenia by 1880. Central European countries are some of the most significant food producers in the world. Germany is the world's largest hops producer with a 34.27% share in 2010, the third-largest producer of rye and barley, the fifth-largest rapeseed producer, the sixth-largest milk producer, and the fifth-largest potato producer. Poland is the world's largest triticale producer, the second-largest producer of raspberries and currants, the third-largest producer of rye, the fifth-largest apple and buckwheat producer, and the seventh-largest producer of potatoes. The Czech Republic is the world's fourth-largest hops producer and eighth-largest producer of triticale. Hungary is the world's fifth-largest hops and seventh-largest triticale producer. Serbia is the world's second-largest producer of plums and second-largest producer of raspberries. Slovenia is the world's sixth-largest hops producer.
Central European business has a regional organisation, the Central European Business Association (CEBA), founded in 1996 in New York as a non-profit organization dedicated to promoting business opportunities within Central Europe and supporting the advancement of professionals in America with a Central European background. Central European countries, especially Austria, Croatia, Germany and Switzerland, are some of the most competitive tourism destinations. Poland is presently a major destination for outsourcing. Kraków, Warsaw, and Wrocław (Poland), Prague and Brno (Czech Republic), Budapest (Hungary), Bucharest (Romania), Bratislava (Slovakia), Ljubljana (Slovenia), Belgrade (Serbia) and Zagreb (Croatia) are among the world's top 100 outsourcing destinations. Various languages are taught in Central Europe, with certain languages being more popular in different countries. Student performance has varied across Central Europe, according to the Programme for International Student Assessment: in the 2012 study, countries scored at, below, or above the average in the three fields studied. The first university east of France and north of the Alps was Charles University in Prague, established in 1347 or 1348 by Charles IV, Holy Roman Emperor, and modeled on the University of Paris, with the full number of faculties (law, medicine, philosophy and theology). The Central European University (CEU) is a graduate-level, English-language university promoting a distinctively Central European perspective. It was established in 1991 by the Hungarian philanthropist George Soros, who has provided an endowment of US$880 million, making the university one of the wealthiest in Europe. In the academic year 2013/2014, the CEU had 1,381 students from 93 countries and 388 faculty members from 58 countries. Research centres of Central European literature include Harvard University (Cambridge, MA) and Purdue University. 
Central European countries are mostly Catholic (Austria, Croatia, Hungary, Liechtenstein, Luxembourg, Poland, Slovakia, Slovenia) or mixed Catholic and Protestant (Germany and Switzerland). Large Protestant groups include Lutherans and Calvinists. Significant Eastern Catholic and Old Catholic populations are also present throughout Central Europe. Central Europe was a centre of Protestantism in the past; however, Protestantism was mostly eradicated by the Counter-Reformation. The Czech Republic (Bohemia) was historically the first Protestant country; it was then violently re-Catholicised and is now overwhelmingly non-religious, although the largest group of religious people is Catholic (10.3%). Romania and Serbia are mostly Eastern Orthodox with significant Protestant and Catholic minorities. Before the Holocaust (1941–45), there was also a sizeable Ashkenazi Jewish community in the region, numbering approximately 16.7 million people. Several of these countries have large numbers of atheists, undeclared and non-religious people: the Czech Republic (non-religious 34.2% and undeclared 45.2%), Germany (non-religious 38%), Slovenia (atheist 14.7%), Luxembourg (non-religious 23.4%), Switzerland (20.1%), Hungary (27.2% undeclared, 16.7% "non-religious" and 1.5% atheists), Slovakia (atheists and non-religious 13.4%, "not specified" 10.6%), Austria (19.7% "other or none"), Liechtenstein (10.6% with no religion), Croatia (4%) and Poland (3% non-believers/agnostics and 1% undeclared). Central European cuisine has evolved through the centuries due to social and political change. Most countries share many dishes. The dishes most typical of Central Europe are sausages and cheeses; the earliest evidence of cheesemaking in the archaeological record dates back to 5,500 BCE (Kujawy, Poland). Other foods widely associated with Central Europe are goulash and beer. 
The list of countries by beer consumption per capita is led by the Czech Republic, followed by Germany and Austria; Poland comes 5th, Croatia 7th and Slovenia 13th. Human rights have a long tradition in Central Europe. In 1222, Hungary defined the rights of the nobility for the first time in its "Golden Bull". In 1264, the Statute of Kalisz and the General Charter of Jewish Liberties introduced numerous rights for the Jews in Poland, granting them de facto autonomy. In 1783, Poland became the first country to forbid corporal punishment of children in schools. In the same year, the German state of Baden banned slavery. On the other hand, there were also major regressions, such as the "Nihil novi" act in Poland in 1505, which forbade peasants from leaving their land without permission from their feudal lord. Generally, the countries in the region are progressive on the issue of human rights: the death penalty is illegal in all of them, corporal punishment is outlawed in most of them, and people of both genders can vote in elections. Nevertheless, Central European countries struggle to adopt newer generations of human rights, such as same-sex marriage. Austria, the Czech Republic, Germany, and Poland also have a history of participation in the CIA's extraordinary rendition and detention program, according to the Open Society Foundation. The regional writing tradition revolves around the turbulent history of the region, as well as its cultural diversity; its existence is sometimes challenged. Specific courses on Central European literature are taught at Stanford University, Harvard University and the Jagiellonian University, and there are cultural magazines dedicated to regional literature. The Angelus Central European Literature Award, worth 150,000 PLN (about $50,000 or £30,000), is given to writers originating from the region. Likewise, the Vilenica International Literary Prize is awarded to a Central European author for "outstanding achievements in the field of literature and essay writing." 
There are a number of Central European sport events and leagues. Football is one of the most popular sports. The countries of Central Europe have fielded many great national teams throughout history and hosted several major competitions. Yugoslavia hosted UEFA Euro 1976 before the competition expanded to eight teams, and Germany (then West Germany) hosted UEFA Euro 1988. More recently, the 2008 and 2012 UEFA European Championships were held in Austria & Switzerland and Poland & Ukraine respectively. Germany has hosted two FIFA World Cups (1974 and 2006) and is the current champion (as of 2014). Central Europe is the birthplace of several regional political organisations. Central Europe is home to some of the world's oldest democracies. However, most of them have been impacted by totalitarianism, particularly Fascism and Nazism. During World War II, Germany and Italy occupied all Central European countries except Switzerland. In all occupied countries, the Axis powers suspended democracy and installed puppet regimes loyal to the occupation forces; they also forced conquered countries to apply racial laws and to form military units to support the German and Italian fight against the Communists. After World War II, almost the whole of Central Europe (its eastern and middle parts) came under Communist rule, which likewise banned democracy and free elections, and human rights did not exist in Communist countries. Most of Central Europe was occupied by and later allied with the Soviet Union, often against its will, through rigged referendums (e.g., the Polish people's referendum of 1946) or force (northeast Germany, Poland, Hungary et alia). Nevertheless, these experiences have been dealt with in most of these countries, and most Central European countries score very highly in the Democracy Index. In spite of its turbulent history, Central Europe is currently one of the world's safest regions, with most Central European countries ranking in the top 20%. 
The time zone used in most parts of the European Union is a standard time one hour ahead of Coordinated Universal Time. It is commonly called Central European Time because it was first adopted in Central Europe. Central Europe is mentioned in the 35th episode of Lovejoy, entitled "The Prague Sun", filmed in 1992. While walking over the famous Charles Bridge, the main character, Lovejoy, says: "I've never been to Prague before. Well, it is one of the great unspoiled cities in Central Europe. Notice: I said 'Central', not 'Eastern'! The Czechs are a bit funny about that; they think of Eastern Europeans as 'turnip heads'." Wes Anderson's Oscar-winning film The Grand Budapest Hotel is regarded as a fictionalised celebration of the 1930s in Central Europe, and of the region's musical tastes.
https://en.wikipedia.org/wiki?curid=5188
Geography of Canada Canada has a vast geography that occupies much of the continent of North America, sharing land borders with the contiguous United States to the south and the U.S. state of Alaska to the northwest. Canada stretches from the Atlantic Ocean in the east to the Pacific Ocean in the west; to the north lies the Arctic Ocean. Greenland is to the northeast and to the southeast Canada shares a maritime boundary with France's overseas collectivity of Saint Pierre and Miquelon, the last vestige of New France. By total area (including its waters), Canada is the second-largest country in the world, after Russia. By land area alone, however, Canada ranks fourth, the difference being due to it having the world's largest proportion of fresh water lakes. Of Canada's thirteen provinces and territories, only two are landlocked (Alberta and Saskatchewan) while the other eleven all directly border one of three oceans. Canada is home to the world's northernmost settlement, Canadian Forces Station Alert, on the northern tip of Ellesmere Island—latitude 82.5°N—which lies from the North Pole. Much of the Canadian Arctic is covered by ice and permafrost. Canada has the longest coastline in the world, with a total length of ; additionally, its border with the United States is the world's longest land border, stretching . Three of Canada's Arctic islands, Baffin Island, Victoria Island and Ellesmere Island, are among the ten largest in the world. Since the end of the last glacial period, Canada has consisted of eight distinct forest regions, including extensive boreal forest on the Canadian Shield; 42 percent of the land acreage of Canada is covered by forests (approximately 8 percent of the world's forested land), made up mostly of spruce, poplar and pine. Canada has over 2,000,000 lakes—563 greater than —which is more than any other country, containing much of the world's fresh water. 
There are also freshwater glaciers in the Canadian Rockies, the Coast Mountains and the Arctic Cordillera. Canada is geologically active, having many earthquakes and potentially active volcanoes, notably the Mount Meager massif, Mount Garibaldi, the Mount Cayley massif, and the Mount Edziza volcanic complex. Average winter and summer high temperatures across Canada range from Arctic weather in the north to hot summers in the southern regions, with four distinct seasons. Canada has a diverse climate, varying from temperate on the west coast of British Columbia to subarctic in the north. Extreme northern Canada can have snow for most of the year, with a polar climate. Landlocked areas tend to have a warm-summer continental climate, with the exception of Southwestern Ontario, which has a hot-summer humid continental climate. Parts of Western Canada have a semi-arid climate, and parts of Vancouver Island can even be classified as a warm-summer Mediterranean climate. Temperature extremes in Canada range from in Midale and Yellow Grass, Saskatchewan, on July 5, 1937, to in Snag, Yukon, on February 3, 1947. Canada covers and a panoply of geoclimatic regions, of which there are eight main ones. Canada also encompasses vast maritime terrain, with the world's longest coastline of . The physical geography of Canada is widely varied. Boreal forests prevail throughout the country, ice is prominent in northerly Arctic regions and through the Rocky Mountains, and the relatively flat Canadian Prairies in the southwest facilitate productive agriculture. The Great Lakes feed the St. Lawrence River (in the southeast), where lowlands host much of Canada's population. The Appalachian mountain range extends from Alabama through the Gaspé Peninsula and the Atlantic Provinces, creating rolling hills indented by river valleys. It also runs through parts of southern Quebec. 
The Appalachian mountains (more specifically the Chic-Choc Mountains, the Notre Dame Mountains, and the Long Range Mountains) are an old and eroded range, approximately 380 million years in age. Notable mountains in the Appalachians include Mount Jacques-Cartier (Quebec, ), Mount Carleton (New Brunswick, ), and The Cabox (Newfoundland, ). Parts of the Appalachians are home to a rich endemic flora and fauna and are considered to have been nunataks during the last glaciation era. The southern parts of Quebec and Ontario, in the basin of the Great Lakes (bordered entirely by Ontario on the Canadian side) and the St. Lawrence (often called the St. Lawrence Lowlands), form another particularly rich sedimentary plain. Prior to colonization and the heavy urban sprawl of the 20th century, this Eastern Great Lakes lowland forests area was home to large mixed forests covering a mostly flat stretch of land between the Appalachian Mountains and the Canadian Shield. Most of this forest has been cleared by agriculture and logging operations, but the remaining forests are for the most part heavily protected. In this part of Canada begins one of the world's largest estuaries, the Estuary of Saint Lawrence (see Gulf of St. Lawrence lowland forests). While the relief of these lowlands is particularly flat and regular, a group of batholiths known as the Monteregian Hills are spread along a mostly regular line across the area. The most notable are Montreal's Mount Royal and Mont Saint-Hilaire. These hills are known for a great richness in precious minerals. The northeastern part of Alberta; the northern parts of Saskatchewan, Manitoba, Ontario, and Quebec; all of Labrador and the Great Northern Peninsula of Newfoundland; the eastern mainland Northwest Territories; most of Nunavut's mainland; and, of its Arctic Archipelago, Baffin Island and significant bands through Somerset, Southampton, Devon and Ellesmere islands are located on a vast rock base known as the Canadian Shield. 
The Shield mostly consists of eroded hilly terrain and contains many lakes and important rivers used for hydroelectric production, particularly in northern Quebec and Ontario. The Shield also encloses an area of wetlands, the Hudson Bay lowlands. Some particular regions of the Shield are referred to as mountain ranges, including the Torngat and Laurentian Mountains. The Shield cannot support intensive agriculture, although there is subsistence agriculture and small dairy farms in many of the river valleys and around the abundant lakes, particularly in the southern regions. Boreal forest covers much of the Shield, with a mix of conifers that provide valuable timber resources in areas such as the Central Canadian Shield forests ecoregion that covers much of Northern Ontario. The region is known for its extensive mineral reserves, such as emeralds, diamonds and copper, and the Canadian Shield is sometimes called a mineral storehouse. The Canadian Prairies are part of a vast sedimentary plain covering much of Alberta, southern Saskatchewan, and southwestern Manitoba, as well as much of the region between the Rocky Mountains and the Great Slave and Great Bear lakes in the Northwest Territories. "The plains" generally describes the expanses of largely flat, arable agricultural land which sustain extensive grain farming operations in the southern part of the provinces. Despite this, some areas such as the Cypress Hills and the Alberta Badlands are quite hilly, and the prairie provinces contain large areas of forest such as the Mid-Continental Canadian forests. The size is roughly ~. The Canadian Cordillera, contiguous with the American cordillera, is bounded by the Rocky Mountains to the east and the Pacific Ocean to the west. The Canadian Rockies are part of a major continental divide that extends north and south through western North America and western South America. 
The Columbia and Fraser Rivers have their headwaters in the Canadian Rockies and are, respectively, the second- and third-largest rivers draining to the west coast of North America. To the west of their headwaters, across the Rocky Mountain Trench, is a second belt of mountains, the Columbia Mountains, comprising the Selkirk, Purcell, Monashee and Cariboo sub-ranges. Immediately west of the Columbia Mountains is a large and rugged Interior Plateau, encompassing the Chilcotin and Cariboo regions in central British Columbia (the Fraser Plateau), the Nechako Plateau further north, and the Thompson Plateau in the south. The Peace River Valley in northeastern British Columbia is Canada's most northerly agricultural region, although it is part of the Prairies. The dry, temperate climate of the Okanagan Valley in south central British Columbia provides ideal conditions for fruit growing and a flourishing wine industry; the semi-arid belt of the Southern Interior also includes the Fraser Canyon and the Thompson, Nicola, Similkameen, Shuswap and Boundary regions, and fruit-growing is common in these areas as well as in the West Kootenay. Between the plateau and the coast is the province's largest mountain range, the Coast Mountains, which contain some of the largest temperate-latitude icefields in the world. On the south coast of British Columbia, Vancouver Island is separated from the mainland by the continuous Juan de Fuca, Georgia, and Johnstone Straits. These waters contain a large number of islands, notably the Gulf Islands and the Discovery Islands. North, near the Alaskan border, Haida Gwaii lies across Hecate Strait from the North Coast region and, to its north, across Dixon Entrance from Southeast Alaska. Other than in the plateau regions of the Interior and its many river valleys, most of British Columbia is coniferous forest. 
The only temperate rain forests in Canada are found along the Pacific Coast in the Coast Mountains, on Vancouver Island, on Haida Gwaii, and in the Cariboo Mountains on the eastern flank of the Plateau. The Western Cordillera continues northwards past the Liard River in northernmost British Columbia to include the Mackenzie and Selwyn Ranges, which lie in the far western Northwest Territories and the eastern Yukon Territory. West of them is the large Yukon Plateau and, west of that, the Yukon Ranges and Saint Elias Mountains, which include some of Canada's and British Columbia's highest summits: Mount Saint Elias in the Kluane region and Mount Fairweather in the Tatshenshini-Alsek region. The headwaters of the Yukon River, the largest and longest of the rivers on the Pacific Slope, lie in northern British Columbia at Atlin and Teslin Lakes. Western Canada has many volcanoes and is part of the Pacific Ring of Fire, a system of volcanoes found around the margins of the Pacific Ocean. There are over 200 young volcanic centres stretching northward from the Cascade Range to Yukon. They are grouped into five volcanic belts with different volcano types and tectonic settings. The Northern Cordilleran Volcanic Province was formed by faulting, cracking, rifting, and the interaction between the Pacific Plate and the North American Plate. The Garibaldi Volcanic Belt was formed by subduction of the Juan de Fuca Plate beneath the North American Plate. The Anahim Volcanic Belt was formed as a result of the North American Plate sliding westward over the Anahim hotspot. The Chilcotin Group is believed to have formed as a result of back-arc extension behind the Cascadia subduction zone. The Wrangell Volcanic Field formed as a result of subduction of the Pacific Plate beneath the North American Plate at the easternmost end of the Aleutian Trench. 
The volcanic eruption of the Tseax Cone in 1775 was among Canada's worst natural disasters, killing an estimated 2,000 Nisga'a people and destroying their village in the Nass River valley of northern British Columbia. The eruption produced a lava flow, and, according to Nisga'a legend, blocked the flow of the Nass River. Volcanism has also occurred in the Canadian Shield. It contains over 150 volcanic belts (now deformed and eroded down to nearly flat plains) that range from 600 million to 2.8 billion years old. Many of Canada's major ore deposits are associated with Precambrian volcanoes. There are pillow lavas in the Northwest Territories that are about 2.6 billion years old and are preserved in the Cameron River Volcanic Belt. The pillow lavas in rocks over 2 billion years old in the Canadian Shield signify that great oceanic volcanoes existed during the early stages of the formation of the Earth's crust. Ancient volcanoes play an important role in estimating Canada's mineral potential. Many of the volcanic belts bear ore deposits that are related to the volcanism. While the largest part of the Canadian Arctic is composed of seemingly endless permafrost and tundra north of the tree line, it encompasses geological regions of varying types: the Arctic Cordillera (with the British Empire Range and the United States Range on Ellesmere Island) contains the northernmost mountain system in the world. The Arctic Lowlands and Hudson Bay lowlands comprise a substantial part of the geographic region often designated as the Canadian Shield (in contrast to the sole geologic area). The ground in the Arctic is mostly composed of permafrost, making construction difficult and often hazardous, and agriculture virtually impossible. The Arctic, when defined as everything north of the tree line, covers most of Nunavut and the northernmost parts of Northwest Territories, Yukon, Manitoba, Ontario, Quebec, and Labrador. 
Canada holds vast reserves of water: its rivers discharge nearly 9% of the world's renewable water supply, it contains a quarter of the world's wetlands, and it has the third largest amount of glaciers (after Antarctica and Greenland). Because of extensive glaciation, Canada hosts more than two million lakes: of those that are entirely within Canada, more than 31,000 are between in area, while 563 are larger than . Canada's two longest rivers are the Mackenzie, which empties into the Arctic Ocean and drains a large part of northwestern Canada, and the St. Lawrence, which drains the Great Lakes and empties into the Gulf of St. Lawrence. The Mackenzie is over in length while the St. Lawrence is over in length. Rounding out the ten longest rivers within Canada are the Nelson, Churchill, Peace, Fraser, North Saskatchewan, Ottawa, Athabasca and Yukon rivers. The Atlantic watershed drains the entirety of the Atlantic provinces (parts of the Quebec-Labrador border are fixed at the Atlantic Ocean-Arctic Ocean continental divide), most of inhabited Quebec and large parts of southern Ontario. It is mostly drained by the economically important St. Lawrence River and its tributaries, notably the Saguenay, Manicouagan and Ottawa rivers. The Great Lakes and Lake Nipigon are also drained by the St. Lawrence. The Churchill River and Saint John River are other important elements of the Atlantic watershed in Canada. The Hudson Bay watershed drains over a third of Canada. It covers Manitoba, northern Ontario and Quebec, most of Saskatchewan, southern Alberta, southwestern Nunavut and the southern half of Baffin Island. This basin is most important in fighting drought in the prairies and producing hydroelectricity, especially in Manitoba, northern Ontario and Quebec. Major elements of this watershed include Lake Winnipeg, Nelson River, the North Saskatchewan and South Saskatchewan Rivers, Assiniboine River, and Nettilling Lake on Baffin Island. 
Wollaston Lake lies on the boundary between the Hudson Bay and Arctic Ocean watersheds and drains into both; it is the largest lake in the world that naturally drains in two directions. The continental divide in the Rockies separates the Pacific watershed in British Columbia and Yukon from the Arctic and Hudson Bay watersheds. This watershed irrigates the agriculturally important areas of inner British Columbia (such as the Okanagan and Kootenay valleys), and is used to produce hydroelectricity. Major elements are the Yukon, Columbia and Fraser rivers. The northern parts of Alberta, Manitoba and British Columbia, most of the Northwest Territories and Nunavut, and parts of Yukon are drained by the Arctic watershed. This watershed has been little used for hydroelectricity, with the exception of the Mackenzie River, the longest river in Canada. The Peace, Athabasca and Liard Rivers, as well as Great Bear Lake and Great Slave Lake (respectively the largest and second-largest lakes wholly enclosed by Canada), are significant elements of the Arctic watershed. Each of these elements eventually merges with the Mackenzie, which thereby drains the vast majority of the Arctic watershed. The southernmost part of Alberta drains into the Gulf of Mexico through the Milk River and its tributaries. The Milk River originates in the Rocky Mountains of Montana, then flows into Alberta, then returns to the United States, where it empties into the Missouri River. A small area of southwestern Saskatchewan is drained by Battle Creek, which empties into the Milk River. Canada has produced a Biodiversity Action Plan in response to the 1992 international accord; the plan addresses conservation of endangered species and certain habitats. Canada is divided into ten provinces and three territories. 
According to Statistics Canada, 72.0 percent of the population is concentrated within of the nation's southern border with the United States, 70.0% live south of the 49th parallel, and over 60 percent of the population lives along the Great Lakes and St. Lawrence River between Windsor, Ontario, and Quebec City. This leaves the vast majority of Canada's territory as sparsely populated wilderness; Canada's population density is 3.5 people/km2 (9.1/mi2), among the lowest in the world. Despite this, 79.7 percent of Canada's population resides in urban areas, where population densities are increasing. Canada shares with the U.S. the world's longest binational border at ; are with Alaska. The Danish island dependency of Greenland lies to Canada's northeast, separated from the Canadian Arctic islands by Baffin Bay and Davis Strait. The French islands of Saint Pierre and Miquelon lie off the southern coast of Newfoundland in the Gulf of St. Lawrence and have a maritime territorial enclave within Canada's exclusive economic zone. Canada's geographic proximity to the United States has historically bound the two countries together in the political world as well. Canada's position between the Soviet Union (now Russia) and the U.S. was strategically important during the Cold War since the route over the North Pole and Canada was the fastest route by air between the two countries and the most direct route for intercontinental ballistic missiles. Since the end of the Cold War, there has been growing speculation that Canada's Arctic maritime claims may become increasingly important if global warming melts the ice enough to open the Northwest Passage. Canada's abundance of natural resources is reflected in their continued importance in the economy of Canada. Major resource-based industries are fisheries, forestry, agriculture, petroleum products and mining. The fisheries industry has historically been one of Canada's strongest. 
Unmatched cod stocks on the Grand Banks of Newfoundland launched this industry in the 16th century. Today these stocks are nearly depleted, and their conservation has become a preoccupation of the Atlantic Provinces. On the West Coast, tuna stocks are now restricted. The less depleted (but still greatly diminished) salmon population continues to drive a strong fisheries industry. Canada claims a territorial sea, a contiguous zone, an exclusive economic zone, and a continental shelf extending to the edge of the continental margin. Forestry has long been a major industry in Canada. Forest products contribute one fifth of the nation's exports. The provinces with the largest forestry industries are British Columbia, Ontario and Quebec. Fifty-four percent of Canada's land area is covered in forest, and the boreal forests account for four-fifths of Canada's forestland. Five per cent of Canada's land area is arable, none of which is used for permanent crops. Three per cent of Canada's land area is covered by permanent pastures. Canada has 7,200 square kilometres (2,800 mi2) of irrigated land (1993 estimate). Agricultural regions in Canada include the Canadian Prairies, the Lower Mainland and various regions within the Interior of British Columbia, the St. Lawrence Basin and the Canadian Maritimes. Main crops in Canada include flax, oats, wheat, maize, barley, sugar beets and rye in the Prairies; flax and maize in Western Ontario; and oats and potatoes in the Maritimes. Fruit and vegetables are grown primarily in the Annapolis Valley of Nova Scotia, Southwestern Ontario, the Golden Horseshoe region of Ontario, along the south coast of Georgian Bay, and in the Okanagan Valley of British Columbia. Cattle and sheep are raised in the valleys and plateaus of British Columbia. Cattle, sheep and hogs are raised on the Prairies; cattle and hogs in Western Ontario; sheep and hogs in Quebec; and sheep in the Maritimes. 
There are significant dairy regions in central Nova Scotia, southern New Brunswick, the St. Lawrence Valley, northeastern Ontario, southwestern Ontario, the Red River valley of Manitoba, and the valleys in the British Columbia Interior, on Vancouver Island and in the Lower Mainland. Fossil fuels are a more recently developed resource in Canada, with oil and gas extracted from deposits in the Western Canadian Sedimentary Basin since the mid-1900s. While Canada's conventional crude oil deposits are comparatively small, technological developments in recent decades have opened up oil production in Alberta's oil sands to the point where Canada now has some of the largest reserves of oil in the world. Canadian industry also has a long history of extracting its large coal and natural gas reserves. Canada's mineral resources are diverse and extensive. Across the Canadian Shield and in the north there are large iron, nickel, zinc, copper, gold, lead, molybdenum, and uranium reserves. Large diamond concentrations have recently been developed in the Arctic, making Canada one of the world's largest producers. Throughout the Shield there are many mining towns extracting these minerals. The largest, and best known, is Sudbury, Ontario. Sudbury is an exception to the normal process of mineral formation in the Shield, since there is significant evidence that the Sudbury Basin is an ancient meteorite impact crater. The nearby but lesser-known Temagami Magnetic Anomaly has striking similarities to the Sudbury Basin: its magnetic anomalies are very similar, and so it could be a second metal-rich impact crater. The Shield is also covered by vast boreal forests that support an important logging industry. Canada's many rivers have afforded extensive development of hydroelectric power. Extensively developed in British Columbia, Ontario, Quebec and Labrador, the many dams have long provided a clean, dependable source of energy. 
Continuous permafrost in the north is a serious obstacle to development. Cyclonic storms form east of the Rocky Mountains, a result of the mixing of air masses from the Arctic, Pacific, and North American interior, and produce most of the country's rain and snow east of the mountains. Air pollution and the resulting acid rain severely affect lakes and damage forests. Metal smelting, coal-burning utilities, and vehicle emissions impact agricultural and forest productivity. Ocean waters are also becoming contaminated by agricultural, industrial, mining, and forestry activities. Global climate change and the warming of the polar region will likely cause significant changes to the environment, including the loss of the polar bear, the exploration for and then extraction of resources, and the opening of the Northwest Passage as an alternative transport route to the Panama Canal. The northernmost point of land within the boundaries of Canada is Cape Columbia, Ellesmere Island, Nunavut. The northernmost point of the Canadian mainland is Zenith Point on the Boothia Peninsula, Nunavut. The southernmost point is Middle Island, in Lake Erie, Ontario (41°41′N 82°40′W); the southernmost water point lies just south of the island, on the Ontario–Ohio border (41°40′35″N). The southernmost point of the Canadian mainland is Point Pelee, Ontario. The westernmost point is Boundary Peak 187 (60°18′22.929″N 141°00′7.128″W) at the southern end of the Yukon–Alaska border, which roughly follows 141°W but leans very slightly east as it goes north. The easternmost point is Cape Spear, Newfoundland (47°31′N 52°37′W). The easternmost point of the Canadian mainland is Elijah Point, Cape St. Charles, Labrador (52°13′N 55°37′W). The lowest point is sea level at 0 m, while the highest point is Mount Logan, Yukon, at 5,959 m (19,550 ft). The Canadian pole of inaccessibility is reportedly near Jackfish River, Alberta (59°2′N 112°49′W). 
The furthest straight-line distance that can be travelled to Canadian points of land is between the southwest tip of Kluane National Park and Reserve (next to Mount Saint Elias) and Cripple Cove, Newfoundland (near Cape Race) at a distance of .
https://en.wikipedia.org/wiki?curid=5192
Demographics of Canada Statistics Canada conducts a country-wide census that collects demographic data every five years, in the first and sixth years of each decade. The 2016 Canadian Census enumerated a total population of 35,151,728, an increase of around 5.0 percent over the 2011 figure. Between 2011 and May 2016, Canada's population grew by 1.7 million people, with immigrants accounting for two-thirds of the increase. Between 1990 and 2008, the population increased by 5.6 million, equivalent to 20.4 percent overall growth. The main drivers of population growth are immigration and, to a lesser extent, natural growth. Canada has one of the highest per-capita immigration rates in the world, driven mainly by economic policy and, to a lesser extent, family reunification. In 2019, a total of 341,180 immigrants were admitted to Canada, mainly from Asia. New immigrants settle mostly in major urban areas such as Toronto, Montreal and Vancouver. Canada also accepts large numbers of refugees, accounting for over 10 percent of annual global refugee resettlements. The 2016 count of 35,151,728 individuals makes up approximately 0.5% of the world's total population. According to Organisation for Economic Co-operation and Development (OECD)/World Bank figures, Canada's population grew by 5.6 million people (20.4%) from 1990 to 2008, compared with 21.7% growth in the United States and 31.2% growth in Mexico. According to the same statistics, world population growth over that period was 27%, a total of 1,423 million people. By contrast, the population of France grew by 8.0% over the same period, and from 1991 to 2011 the population of the UK increased by 10.0%. The total fertility rate is the number of children born per woman.
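The growth figures above are internally consistent, as a quick arithmetic sketch shows. The derived quantities here are back-calculations from the cited numbers, not published statistics:

```python
# Cross-check the cited 1990-2008 growth figures:
# an increase of 5.6 million people equal to 20.4% growth
increase = 5_600_000
growth_rate = 0.204
implied_1990_population = increase / growth_rate
print(f"implied 1990 population: {implied_1990_population:,.0f}")

# Likewise, the 2011-2016 increase of 1.7 million against the
# 2016 count of 35,151,728 implies growth of roughly 5%:
pop_2016 = 35_151_728
increase_2011_2016 = 1_700_000
implied_growth = increase_2011_2016 / (pop_2016 - increase_2011_2016)
print(f"implied 2011-2016 growth: {implied_growth:.1%}")
```

Both back-calculations agree with the figures quoted in the text to within rounding.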
58.9% of Canadians reported being members of a single ethnic group in the 2016 Census. 31.7% of them stated "Canadian" as their single ethnic origin, followed by Chinese (7.1%), English (5.4%), East Indian (5.4%), French (5.0%), Italian (3.4%), Filipino (3.2%), German (2.8%), First Nations (North American Indian) (2.6%), Scottish (2.3%), and Irish (2.3%).

Demographic statistics according to the World Population Review in 2019:

Demographic statistics according to the CIA World Factbook, unless otherwise indicated:

Total: 40.6

Ethnic origins (2016 est.): Canadian 32.3%, English 18.3%, Scottish 13.9%, French 13.6%, Irish 13.4%, German 9.6%, Chinese 5.1%, Italian 4.6%, North American Indian 4.4%, East Indian 4%, other 51.6%. Note: percentages add up to more than 100% because respondents were able to identify more than one ethnic origin.

Ethnic origins (2011 est.): Canadian 32.2%, English 19.8%, French 15.5%, Scottish 14.4%, Irish 13.8%, German 9.8%, Italian 4.5%, Chinese 4.5%, North American Indian 4.2%, other 50.9%. Note: percentages add up to more than 100% because respondents were able to identify more than one ethnic origin.

Languages (2011 est.): English (official) 58.7%, French (official) 22%, Punjabi 1.4%, Italian 1.3%, Spanish 1.3%, German 1.3%, Cantonese 1.2%, Tagalog 1.2%, Arabic 1.1%, other 10.5%.

Population distribution: the vast majority of Canadians are positioned in a discontinuous band within approximately 300 km of the southern border with the United States; the most populated province is Ontario, followed by Quebec and British Columbia.

Religions (2011 est.): Catholic 39% (includes Roman Catholic 38.8%, other Catholic 0.2%), Protestant 20.3% (includes United Church 6.1%, Anglican 5%, Baptist 1.9%, Lutheran 1.5%, Pentecostal 1.5%, Presbyterian 1.4%, other Protestant 2.9%), Orthodox 1.6%, other Christian 6.3%, Muslim 3.2%, Hindu 1.5%, Sikh 1.4%, Buddhist 1.1%, Jewish 1%, other 0.6%, none 23.9%.
Sex ratio:

As the data is entirely self-reported, and reporting individuals may have varying definitions of "ethnic origin" (or may not know their ethnic origin), these figures should not be considered an exact record of the relative prevalence of different ethno-cultural ancestries, but rather of how Canadians self-identify. Statistics Canada projects that immigrants will represent between 24.5% and 30.0% of Canada's population in 2036, compared with 20.7% in 2011. Statistics Canada further projects that visible minorities among the working-age population (15 to 64 years) will make up 33.7–34.3% of Canada's total population, compared to 22.3% in 2016.

Counting both single and multiple responses, the most commonly identified ethnic origins were (2016):

The most common ethnic origins per province are as follows in 2006 (total responses; only percentages of 10% or higher shown; ordered by percentage of "Canadian"). Bold indicates either that the response is dominant within the province, or that the province has the highest ratio (percentage) of the response among provinces. Note: Inuit, other Aboriginal and mixed Aboriginal groups are not listed separately, but they are all accounted for in the Aboriginal total.

All statistics are from the Canada 2011 Census.

Language used most often at work:

Languages by language used most often at home:

Languages by mother tongue:

Statistics Canada (StatCan) grouped responses to the 2011 National Household Survey (NHS) question on religion into nine core religious categories – Buddhist, Christian, Hindu, Jewish, Muslim, Sikh, Traditional (Aboriginal) Spirituality, other religions and no religious affiliation. Among these, of Canadians self-identified as Christians in 2011. The second, third, and fourth-largest categories were Canadians with no religious affiliation at , Canadian Muslims at , and Canadian Hindus at .
Within the 2011 NHS results, StatCan further subcategorized Christianity into nine groups of its own – Anglican, Baptist, Catholic, Christian Orthodox, Lutheran, Pentecostal, Presbyterian, United Church and Other Christian. Among these, of Canadians self-identified as Catholic in 2011. The second- and third-largest ungrouped subcategories of Christian Canadians were United at and Anglican at , while of Christians were grouped into the Other Christian subcategory, which comprises numerous denominations. Of the 3,036,785 or of Canadians identified as Other Christians:
https://en.wikipedia.org/wiki?curid=5193
Politics of Canada The politics of Canada function within a framework of parliamentary democracy and a federal system of parliamentary government with strong democratic traditions. Canada is a constitutional monarchy, in which the monarch is head of state. In practice, executive power is directed by the Cabinet, a committee of ministers of the Crown responsible to the elected House of Commons of Canada and chosen and headed by the Prime Minister of Canada. Canada is described as a "full democracy", with a tradition of liberalism and an egalitarian, moderate political ideology. Far-right and far-left politics have never been a prominent force in Canadian society. Peace, order, and good government, alongside an implied bill of rights, are founding principles of the Canadian government. An emphasis on social justice has been a distinguishing element of Canada's political culture. Canada has placed emphasis on equality and inclusiveness for all its people.

The country has a multi-party system in which many of its legislative practices derive from the unwritten conventions of, and precedents set by, the Westminster parliament of the United Kingdom. The two dominant political parties in Canada have historically been the Liberal Party of Canada and the Conservative Party of Canada (or its predecessors). Smaller parties like the New Democratic Party, the Quebec nationalist Bloc Québécois and the Green Party of Canada have also been able to exert their own influence over the political process. Canada has evolved its own variations: party discipline in Canada is stronger than in the United Kingdom, and more parliamentary votes are considered motions of confidence, which tends to diminish the role of non-Cabinet members of parliament (MPs). Such members, in the government caucus, and junior or lower-profile members of opposition caucuses, are known as backbenchers.
Backbenchers can, however, exert their influence by sitting in parliamentary committees, like the Public Accounts Committee or the National Defence Committee.

Canada's governmental structure was originally established by the British Parliament through the "British North America Act" (now known as the "Constitution Act, 1867"), but the federal model and division of powers were devised by Canadian politicians. Particularly after World War I, citizens of the self-governing Dominions, such as Canada, began to develop a strong sense of identity, and, in the Balfour Declaration of 1926, the British government expressed its intent to grant full autonomy to these regions. Thus in 1931, the British Parliament passed the Statute of Westminster, giving legal recognition to the autonomy of Canada and other Dominions. Following this, Canadian politicians were unable to obtain consensus on a process for amending the constitution until 1982, meaning amendments to Canada's constitution continued to require the approval of the British parliament until that date. Similarly, the Judicial Committee of the Privy Council in Britain continued to make the final decision on criminal appeals until 1933 and on civil appeals until 1949.

Canada's egalitarian approach to governance has emphasized social welfare, economic freedom, and multiculturalism, which is based on selective economic migration, social integration, and the suppression of far-right politics, and which enjoys wide public and political support. Canada's broad range of constituent nationalities and its policies that promote a "just society" are constitutionally protected. Individual rights, equality and inclusiveness (social equality) have risen to the forefront of political and legal importance for most Canadians, as demonstrated through support for the Charter of Rights and Freedoms, a relatively free economy, and socially liberal attitudes toward women's rights (such as pregnancy termination), homosexuality, euthanasia and cannabis use.
There is also a sense of collective responsibility in Canadian political culture, as is demonstrated in general support for universal health care, multiculturalism, gun control, foreign aid, and other social programs. At the federal level, Canada has been dominated by two relatively centrist parties practicing "brokerage politics", the centre-left Liberal Party of Canada and the centre-right Conservative Party of Canada. The historically predominant Liberals position themselves at the centre of the political scale, with the Conservatives sitting on the right and the New Democratic Party occupying the left. Five parties had representatives elected to the federal parliament in the 2019 election: the Liberal Party who currently form the government, the Conservative Party who are the Official Opposition, the New Democratic Party, the Bloc Québécois, and the Green Party of Canada. The bicameral Parliament of Canada consists of three parts: the monarch, the Senate, and the House of Commons. Currently, the Senate, which is frequently described as providing "regional" representation, has 105 members appointed by the Governor-General on the advice of the Prime Minister to serve until age 75. It was created with equal representation from each of Ontario, Quebec, the Maritime region and the Western Provinces. However, it is currently the product of various specific exceptions, additions and compromises, meaning that regional equality is not observed, nor is representation-by-population. The normal number of senators can be exceeded by the monarch on the advice of the Prime Minister, as long as the additional senators are distributed equally with regard to region (up to a total of eight additional Senators). This power of additional appointment has only been used once, when Prime Minister Brian Mulroney petitioned Queen Elizabeth II to add eight seats to the Senate so as to ensure the passage of the Goods and Services Tax legislation. 
The House of Commons currently has 338 members elected in single-member districts in a plurality voting system (first past the post), meaning that members must attain only a plurality (the most votes of any candidate) rather than a majority (50 percent plus one). The electoral districts are also known as ridings. Mandates cannot exceed five years; an election must occur by the end of this time. This fixed mandate has been exceeded only once, when Prime Minister Robert Borden perceived the need to do so during World War I. The size of the House and apportionment of seats to each province is revised after every census, conducted every five years, and is based on population changes and approximately on representation-by-population. Canadians vote for their local Member of Parliament (MP) only. An MP need not be a member of any political party: such MPs are known as independents. When a number of MPs share political opinions they may form a body known as a political party. The "Canada Elections Act" defines a political party as "an organization one of whose fundamental purposes is to participate in public affairs by endorsing one or more of its members as candidates and supporting their election." Forming and registering a federal political party are two different things. There is no legislation regulating the formation of federal political parties. Elections Canada cannot dictate how a federal political party should be formed or how its legal, internal and financial structures should be established. Parties elect their leaders in run-off elections to ensure that the winner receives more than 50% of the votes. Normally the party leader stands as a candidate to be an MP during an election. Canada's parliamentary system empowers political parties and their party leaders. Where one party gets a majority of the seats in the House of Commons, that party is said to have a "majority government." 
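The first-past-the-post rule described above (a plurality, not a majority, wins the seat) can be sketched as follows; the riding totals are invented for illustration:

```python
def riding_winner(votes: dict) -> str:
    """First-past-the-post: the candidate with the highest raw count
    takes the seat; a majority (50% + 1) is not required."""
    return max(votes, key=votes.get)

# Hypothetical riding result: the seat is won with only 34% of the votes.
result = {"Liberal": 34_000, "Conservative": 33_000, "NDP": 22_000, "Green": 11_000}
winner = riding_winner(result)
share = result[winner] / sum(result.values())
print(winner, f"{share:.0%}")
```

A party's run-off leadership elections, by contrast, keep eliminating candidates until someone crosses 50%, which is why the two mechanisms can produce different kinds of winners.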
Through party discipline, the party leader, who is elected in only one riding, exercises a great deal of control over the cabinet and the parliament. Formally, the prime minister and senators are selected by the governor general as the representative of the Queen, though in modern practice the monarch's duties are ceremonial. Consequently, the prime minister, while technically selected by the governor general, is for all practical purposes selected by the party with the majority of seats. That is, the party that wins the most seats normally forms the government, with that party's leader becoming prime minister. The prime minister is not directly elected by the general population, although the prime minister is almost always directly elected as an MP within his or her constituency. Senators, likewise, while technically appointed at the pleasure of the monarch, are ceremonially appointed by the governor general on the advice (and, for most practical purposes, the authority) of the prime minister.

A minority government situation occurs when the party that holds the most seats in the House of Commons holds fewer seats than the opposition parties combined. In this scenario, usually the party leader whose party has the most seats in the House is selected by the governor general to lead the government; however, to create stability, the leader chosen must have the support of the majority of the House, meaning they need the support of at least one other party.

In Canada, the provinces are considered co-sovereign; sovereignty of the provinces is passed on, not by the Governor General or the Canadian parliament, but through the Crown itself. This means that the Crown is "divided" into 11 legal jurisdictions, or 11 "Crowns" – one federal (the Crown in right of Canada) and ten provincial (an example being the Crown in right of British Columbia).
Federal-provincial (or intergovernmental, formerly Dominion-provincial) relations is a regular issue in Canadian politics: Quebec wishes to preserve and strengthen its distinctive nature, western provinces desire more control over their abundant natural resources, especially energy reserves; industrialized Central Canada is concerned with its manufacturing base, and the Atlantic provinces strive to escape from being less affluent than the rest of the country. In order to ensure that social programs such as health care and education are funded consistently throughout Canada, the "have-not" (poorer) provinces receive a proportionately greater share of federal "transfer (equalization) payments" than the richer, or "have", provinces do; this has been somewhat controversial. The richer provinces often favour freezing transfer payments, or rebalancing the system in their favour, based on the claim that they already pay more in taxes than they receive in federal government services, and the poorer provinces often favour an increase on the basis that the amount of money they receive is not sufficient for their existing needs. Particularly in the past decade, some scholars have argued that the federal government's exercise of its unlimited constitutional spending power has contributed to strained federal-provincial relations. This power, which allows the federal government to spend the revenue it raises in any way that it pleases, allows it to overstep the constitutional division of powers by creating programs that encroach on areas of provincial jurisdiction. The federal spending power is not expressly set out in the "Constitution Act, 1867"; however, in the words of the Court of Appeal for Ontario the power "can be inferred" from s. 91(1A), "the public debt and property". A prime example of an exercise of the spending power is the "Canada Health Act", which is a conditional grant of money to the provinces. 
Regulation of health services is, under the Constitution, a provincial responsibility. However, by making the funding available to the provinces under the "Canada Health Act" contingent upon delivery of services according to federal standards, the federal government has the ability to influence health care delivery. This spending power, coupled with Supreme Court rulings—such as Reference re Canada Assistance Plan (B.C.)—that have held that funding delivered under the spending power can be reduced unilaterally at any time, has contributed to strained federal-provincial relations. Except for three short-lived transitional or minority governments, prime ministers from Quebec led Canada continuously from 1968 to early 2006. Québécois led both Liberal and Progressive Conservative governments in this period. Monarchs, governors general, and prime ministers are now expected to be at least functional, if not fluent, in both English and French. In selecting leaders, political parties give preference to candidates who are fluently bilingual. Also, by law, three of the nine positions on the Supreme Court of Canada must be held by judges from Quebec. This representation makes sure that at least three judges have sufficient experience with the civil law system to treat cases involving Quebec laws. Canada has a long and storied history of secessionist movements (see Secessionist movements of Canada). National unity has been a major issue in Canada since the forced union of Upper and Lower Canada in 1840. The predominant and lingering issue concerning Canadian national unity has been the ongoing conflict between the French-speaking majority in Quebec and the English-speaking majority in the rest of Canada. 
Quebec's continued demands for recognition of its "distinct society" through special political status has led to attempts for constitutional reform, most notably with the failed attempts to amend the constitution through the Meech Lake Accord and the Charlottetown Accord (the latter of which was rejected through a national referendum). Since the Quiet Revolution, sovereigntist sentiments in Quebec have been variably stoked by the patriation of the Canadian constitution in 1982 (without Quebec's consent) and by the failed attempts at constitutional reform. Two provincial referenda, in 1980 and 1995, rejected proposals for sovereignty with majorities of 60% and 50.6% respectively. Given the narrow federalist victory in 1995, a reference was made by the Chrétien government to the Supreme Court of Canada in 1998 regarding the legality of unilateral provincial secession. The court decided that a unilateral declaration of secession would be unconstitutional. This resulted in the passage of the "Clarity Act" in 2000. The Bloc Québécois, a sovereigntist party which runs candidates exclusively in Quebec, was started by a group of MPs who left the Progressive Conservative (PC) party (along with several disaffected Liberal MPs), and first put forward candidates in the 1993 federal election. With the collapse of the PCs in that election, the Bloc and Liberals were seen as the only two viable parties in Quebec. Thus, prior to the 2006 election, any gain by one party came at the expense of the other, regardless of whether national unity was really at issue. The Bloc, then, benefited (with a significant increase in seat total) from the impressions of corruption that surrounded the Liberal Party in the lead-up to the 2004 election. However, the newly unified Conservative party re-emerged as a viable party in Quebec by winning 10 seats in the 2006 election. 
In the 2011 election, the New Democratic Party won 59 of Quebec's 75 seats, substantially reducing the seat count of every other party in the province. The NDP surge nearly destroyed the Bloc, reducing them to 4 seats, far below the minimum requirement of 12 seats for official party status.

Newfoundland and Labrador also presents a national-unity concern. As the Dominion of Newfoundland was a self-governing country equal in status to Canada until 1949, there are large, though unco-ordinated, currents of Newfoundland nationalism and anti-Canadian sentiment among much of the population. This is due in part to the perception of chronic federal mismanagement of the fisheries, to forced resettlement away from isolated settlements in the 1960s, to the government of Quebec still publishing inaccurate political maps that claim parts of Labrador, and to the perception that mainland Canadians look down upon Newfoundlanders. In 2004, the Newfoundland and Labrador First Party contested provincial elections, and in 2008 it ran in federal ridings within the province. In 2004, then-premier Danny Williams ordered all federal flags removed from government buildings as a result of offshore revenues lost to equalization clawbacks; on December 23, 2004, Williams made a statement to this effect to reporters in St. John's.

Western alienation is another national-unity-related concept that enters into Canadian politics. Residents of the four western provinces, particularly Alberta, have often been unhappy with a lack of influence and a perceived lack of understanding when residents of Central Canada consider "national" issues. While this is seen to play itself out through many avenues (media, commerce, and so on), in politics it has given rise to a number of political parties whose base constituency is in western Canada.
These include the United Farmers of Alberta, who first won federal seats in 1917, the Progressives (1921), the Social Credit Party (1935), the Co-operative Commonwealth Federation (1935), the Reconstruction Party (1935), New Democracy (1940) and, most recently, the Reform Party (1989). The Reform Party's slogan "The West Wants In" was echoed by commentators when, after a successful merger with the PCs, the successor to both parties, the Conservative Party, won the 2006 election. With the party led by Stephen Harper, an MP from Alberta, the electoral victory was said to have made "The West IS In" a reality. However, regardless of specific electoral successes or failures, the concept of western alienation continues to be important in Canadian politics, particularly on a provincial level, where opposing the federal government is a common tactic for provincial politicians. For example, in 2001, a group of prominent Albertans produced the Alberta Agenda, urging Alberta to take steps to make full use of its constitutional powers, much as Quebec has done.

Canada is considered by most sources to be a very stable democracy. In 2006, "The Economist" ranked Canada the third-most democratic nation in its Democracy Index, ahead of all other nations in the Americas and ahead of every nation more populous than itself. In 2008, Canada was ranked World No. 11, again ahead of all more populous countries and of the other states of the Americas. (In 2008, the United States was ranked World No. 18, Uruguay World No. 23, and Costa Rica World No. 27.)

The Liberal Party of Canada, under the leadership of Paul Martin, won a minority victory in the June 2004 general elections. In December 2003, Martin had succeeded fellow Liberal Jean Chrétien, who had, in 2000, become the first prime minister to lead three consecutive majority governments since 1945.
However, in 2004 the Liberals lost seats in Parliament, going from 172 of 301 parliamentary seats to 135 of 308, and from 40.9% to 36.7% of the popular vote. The Canadian Alliance, which did well in western Canada in the 2000 election but was unable to make significant inroads in the East, merged with the Progressive Conservative Party to form the Conservative Party of Canada in late 2003. The new party proved moderately successful in the 2004 campaign, gaining seats from a combined Alliance-PC total of 78 in 2000 to 99 in 2004, though it lost ground in the popular vote, falling from a combined 37.7% in 2000 to 29.6%. In 2006, the Conservatives, led by Stephen Harper, won a minority government with 124 seats, improving their share of the vote to 36.3%. During this election, the Conservatives also made major breakthroughs in Quebec, gaining 10 seats where they had won none in 2004.

At the 2011 federal election, the Conservatives won a majority government with 167 seats. For the first time, the NDP became the Official Opposition, with 102 seats; the Liberals finished in third place with 34 seats. This was the first election in which the Green Party won a seat, that of leader Elizabeth May; the Bloc won 4 seats, losing official party status.

The Liberal Party, after dominating Canadian politics since the 1920s, was in decline in the early years of the 21st century. As Lang (2010) concluded, the Liberals lost their majority in Parliament in the 2004 election, were defeated in 2006, and in 2008 became little more than a "rump", falling to their lowest seat count in decades and a mere 26% of the popular vote. Furthermore, said Lang (a Liberal himself), the party's prospects "are as bleak as they have ever been." In the 2011 election, the Liberals suffered a crushing defeat, securing only 18.9% of the vote share and only 34 seats. As a result, the Liberals lost their status as official opposition to the NDP.
In explaining those trends, Behiels (2010) synthesized major studies and reported that "a great many journalists, political advisors, and politicians argue that a new political party paradigm is emerging": a new power configuration based on a right-wing political party capable of sharply changing the traditional role of the state (federal and provincial) in the twenty-first century. Behiels argued that, unlike Brian Mulroney, who tried but failed to challenge the long-term dominance of the Liberals, Harper's attempt had proven more determined, systematic and successful. Many commentators thought it signalled a major realignment. "The Economist" said, "the election represents the biggest realignment of Canadian politics since 1993." Lawrence Martin, commentator for the "Globe and Mail", said, "Harper has completed a remarkable reconstruction of a Canadian political landscape that endured for more than a century. The realignment saw both old parties of the moderate middle, the Progressive Conservatives and the Liberals, either eliminated or marginalized." "Maclean's" said the election marked "an unprecedented realignment of Canadian politics" as "the Conservatives are now in a position to replace the Liberals as the natural governing party in Canada."

Despite the grim outlook and poor early poll numbers, when the 2015 election was held, the Liberals under Justin Trudeau staged an unprecedented comeback, and the realignment proved only temporary. Gaining 148 seats, they won a majority government for the first time since 2000. The "Toronto Star" claimed the comeback was "headed straight for the history books" and that Harper's name would "forever be joined with that of his Liberal nemesis in Canada’s electoral annals". Spencer McKay for the "National Post" suggested that "maybe we’ve witnessed a revival of Canada’s 'natural governing party'".

Funding changes were made to ensure greater reliance on personal contributions.
Personal donations to federal parties and campaigns benefit from tax credits, although the amount of tax relief depends on the amount given, and only those who pay income tax receive any benefit. A good part of the reasoning behind the change in funding was that union or business funding should not be allowed to have as much impact on federal election funding, as these are not contributions from citizens and are not evenly spread out between parties. Unions and businesses are still allowed to contribute to elections, but only in a minor fashion. The new rules stated that a party had to receive 2% of the vote nationwide in order to receive general federal funding. Each vote garnered a certain dollar amount for a party (approximately $1.75) in future funding. For the initial disbursement, approximations were made based on previous elections. The NDP received more votes than expected (its national share of the vote went up), while the new Conservative Party of Canada received fewer votes than had been estimated and was asked to refund the difference. Quebec was the first province to implement a similar system of funding, many years before the changes to funding of federal parties.

Federal funds are disbursed quarterly to parties, beginning at the start of 2005. For the moment, this disbursement delay leaves the NDP and the Green Party in a better position to fight an election, since they rely more on individual contributors than federal funds. The Green Party now receives federal funds, since it for the first time received a sufficient share of the vote in the 2004 election.

In 2007, news emerged of a funding loophole that "could cumulatively exceed the legal limit by more than $60,000," through anonymous recurrent donations of $200 to every riding of a party from corporations or unions.
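The per-vote allowance described above (a 2% national threshold and approximately $1.75 per vote, disbursed quarterly) can be sketched as follows. The vote totals are hypothetical, and the simplification here ignores any additional thresholds or inflation adjustments the Act may have applied:

```python
def quarterly_allowance(party_votes: int, total_votes: int,
                        per_vote: float = 1.75) -> float:
    """Per-vote party allowance as described in the text: a party
    qualifies only with at least 2% of the national vote, and the
    annual amount (votes x ~$1.75) is paid out in four quarterly
    instalments."""
    if party_votes / total_votes < 0.02:
        return 0.0
    return party_votes * per_vote / 4

# Hypothetical national vote totals:
total = 14_000_000
print(quarterly_allowance(5_000_000, total))  # a large party qualifies
print(quarterly_allowance(200_000, total))    # under 2% nationwide: nothing
```

The threshold explains why very small parties received no general funding even though every one of their votes would otherwise have carried a dollar value.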
At the time, for each individual, the legal annual donation limit was $1,100 for each party, $1,100 combined total for each party's associations, and, in an election year, an additional $1,100 combined total for each party's candidates. All three limits increase on 1 April every year based on the inflation rate.

"Ordered by number of elected representatives in the House of Commons"

Leaders' debates in Canada consist of two debates, one in English and one in French, both produced by a consortium of Canada's five major television broadcasters (CBC/SRC, CTV, Global and TVA) and usually featuring the leaders of all parties with representation in the House of Commons. These debates air on the networks of the producing consortium as well as on the public affairs and parliamentary channel CPAC and the American public affairs network C-SPAN.

The highest court in Canada is the Supreme Court of Canada, the final court of appeal in the Canadian justice system. The court is composed of nine judges: eight puisne justices and the Chief Justice of Canada. Justices of the Supreme Court of Canada are appointed by the Governor-in-Council. The "Supreme Court Act" limits eligibility for appointment to persons who have been judges of a superior court or members of the bar for ten or more years. By law, three of the nine positions must be held by judges of a superior court of Quebec or members of the bar of Quebec.

The Canadian government operates the public service using departments, smaller agencies (for example, commissions, tribunals, and boards), and crown corporations. There are two types of departments: central agencies such as Finance, the Privy Council Office, and the Treasury Board Secretariat have an organizing and oversight role for the entire public service; line departments perform tasks in a specific area or field, such as the departments of Agriculture, Environment, or Defence.
Scholar Peter Aucoin, writing about the Canadian Westminster system, raised concerns in the early 2000s about the centralization of power; the increased number, role and influence of partisan political staff; the politicization of appointments to the senior public service; and the assumption that the public service is promiscuously partisan for the government of the day. In 1967, Canada established a points-based system to determine whether immigrants should be eligible to enter the country, scoring applicants on qualities such as the ability to speak both French and English, level of education, and other attributes that might be expected of someone raised in Canada. The system was considered ground-breaking at the time, since earlier systems were biased on the basis of ethnicity. However, many foreign nationals still found it challenging to secure work after emigrating, resulting in a higher unemployment rate among the immigrant population. After winning power in the 2006 federal election, the Conservative Party sought to curb this problem by giving weight to whether the applicant has a standing job offer in Canada. The change has been a source of some contention, as opponents argue that businesses use it to suppress wages, since employers know that an applicant needs a job to successfully complete the immigration process.
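The points-based approach can be illustrated with a toy scoring function. The criteria (official-language ability, education, and the post-2006 weight on a standing job offer) come from the text; the weights and passing mark below are entirely hypothetical, not the actual Canadian grid:

```python
# Illustrative sketch of a points-based admissibility test like the one
# described above. Criteria are from the text; all weights and the
# passing mark are hypothetical.

def points(speaks_english: bool, speaks_french: bool,
           years_of_education: int, has_job_offer: bool) -> int:
    score = 0
    score += 10 if speaks_english else 0
    score += 10 if speaks_french else 0
    score += min(years_of_education, 20)  # cap the education credit
    score += 15 if has_job_offer else 0   # extra weight added after 2006
    return score

PASS_MARK = 40  # hypothetical threshold

applicant = points(speaks_english=True, speaks_french=False,
                   years_of_education=16, has_job_offer=True)
print(applicant, applicant >= PASS_MARK)  # 41 True
```

The design point is that merit is expressed as an additive score against a fixed threshold, rather than as a judgment about the applicant's origin.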
https://en.wikipedia.org/wiki?curid=5194
Economy of Canada The economy of Canada is a highly developed market economy, the world's tenth-largest by nominal GDP and sixteenth-largest by purchasing power parity (PPP). As with other developed nations, the country's economy is dominated by the service industry, which employs about three quarters of Canadians. Canada has the third-highest total estimated value of natural resources, valued at US$33.2 trillion in 2019. It has the world's third-largest proven petroleum reserves and is the fourth-largest exporter of petroleum. It is also the fourth-largest exporter of natural gas. Canada is considered an "energy superpower" due to its abundant natural resources and a small population of 37 million inhabitants relative to its land area. According to the Corruption Perceptions Index, Canada is one of the least corrupt countries in the world, and it is one of the world's top ten trading nations, with a highly globalized economy. Canada historically ranks above the U.S. and most western European nations on The Heritage Foundation's index of economic freedom, and it experiences a relatively low level of income disparity. The country's average household disposable income per capita is "well above" the OECD average. The Toronto Stock Exchange is the ninth-largest stock exchange in the world by market capitalization, listing over 1,500 companies with a combined market capitalization of over US$2 trillion. In 2018, Canadian trade in goods and services reached  trillion. Canada's exports totalled over  billion, while its imported goods were worth over  billion, of which approximately  billion originated from the United States and  billion from non-U.S. sources. In 2018, Canada had a trade deficit in goods of  billion and a trade deficit in services of  billion. Canada is unusual among developed countries in the importance of its primary sector, with the logging and oil industries being two of its most important.
Canada also has a sizable manufacturing sector based in Central Canada, with the automobile and aircraft industries being especially important. With the world's longest coastline, Canada has the eighth-largest commercial fishing and seafood industry in the world. Canada is one of the global leaders of the entertainment software industry. It is a member of APEC, NAFTA, the G7, the G20, the OECD and the WTO. With the exception of Great Britain and a few island nations in the Caribbean, Canada is the only major parliamentary system in the Western Hemisphere. As a result, Canada has developed its own social and political institutions, distinct from most other countries in the world. Though the Canadian economy is closely integrated with the American economy, it has developed unique economic institutions. The Canadian economic system generally combines elements of private enterprise and public enterprise. Many aspects of public enterprise, most notably the development of an extensive social welfare system to redress social and economic inequities, were adopted after the end of World War II in 1945. Canada has a private-to-public (Crown) property ratio of 60:40 and one of the highest levels of economic freedom in the world. Today, Canada closely resembles the U.S. in its market-oriented economic system and pattern of production. As of 2019, Canada had 56 companies in the Forbes Global 2000 list, ranking ninth, just behind South Korea and ahead of Saudi Arabia. International trade makes up a large part of the Canadian economy, particularly of its natural resources. In 2009, agriculture, energy, forestry and mining exports accounted for about 58% of Canada's total exports. Machinery, equipment, automotive products and other manufactures accounted for a further 38% of exports in 2009. In 2009, exports accounted for about 30% of Canada's GDP. The United States is by far its largest trading partner, accounting for about 73% of exports and 63% of imports as of 2009.
Canada's combined exports and imports ranked 8th among all nations in 2006. About 4% of Canadians are directly employed in primary resource fields, which account for 6.2% of GDP. These industries remain paramount in many parts of the country: many, if not most, towns in northern Canada, where agriculture is difficult, exist because of a nearby mine or source of timber. Canada is a world leader in the production of many natural resources such as gold, nickel, uranium, diamonds and lead, and, in recent years, crude petroleum, which, with the world's second-largest oil reserves, is taking an increasingly prominent position in natural resource extraction. Several of Canada's largest companies are based in natural resource industries, such as Encana, Cameco, Goldcorp, and Barrick Gold. The vast majority of these products are exported, mainly to the United States. There are also many secondary and service industries directly linked to primary ones. For instance, one of Canada's largest manufacturing industries is the pulp and paper sector, which is directly linked to the logging business. The reliance on natural resources has several effects on the Canadian economy and society. While manufacturing and service industries are easy to standardize, natural resources vary greatly by region. This ensured that differing economic structures developed in each region of Canada, contributing to Canada's strong regionalism. At the same time, the vast majority of these resources are exported, integrating Canada closely into the international economy. Howlett and Ramesh argue that the inherent instability of such industries also contributes to greater government intervention in the economy, to reduce the social impact of market changes. Natural resource industries also raise important questions of sustainability. Despite many decades as a leading producer, there is little risk of depletion: large discoveries continue to be made, such as the massive nickel find at Voisey's Bay.
Moreover, the far north remains largely undeveloped as producers await higher prices or new technologies, since many operations in this region are not yet cost-effective. In recent decades Canadians have become less willing to accept the environmental destruction associated with exploiting natural resources. High wages and Aboriginal land claims have also curbed expansion. Instead, many Canadian companies have focused their exploration, exploitation and expansion activities overseas, where prices are lower and governments more amenable. Canadian companies are increasingly playing important roles in Latin America, Southeast Asia, and Africa. The depletion of renewable resources has raised concerns in recent years. After decades of escalating overutilization, the cod fishery all but collapsed in the 1990s, and the Pacific salmon industry also suffered greatly. The logging industry, after many years of activism, has in recent years moved to a more sustainable model, or to other countries. The following table shows the main economic indicators in 1980–2018; inflation under 2% is shown in green. Export trade from Canada is measured in US dollars; in 2018, Canada exported over US$450 billion. Import trade in 2017 is measured in US dollars. Productivity measures are key indicators of economic performance and a key source of economic growth and competitiveness. The Organisation for Economic Co-operation and Development (OECD)'s "Compendium of Productivity Indicators", published annually, presents a broad overview of productivity levels and growth in member nations, highlighting key measurement issues. It analyses the role of "productivity as the main driver of economic growth and convergence" and the "contributions of labour, capital and MFP in driving economic growth". According to this definition, "MFP is often interpreted as the contribution to economic growth made by factors such as technical and organisational innovation" (OECD 2008, 11).
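The residual definition of MFP quoted above lends itself to a short growth-accounting sketch: output growth minus the share-weighted growth of inputs, with whatever is left over attributed to MFP. All figures below are hypothetical:

```python
# Growth-accounting sketch of multifactor productivity (MFP) as a
# residual, per the OECD definition quoted above. All numbers are
# hypothetical illustrations.

def mfp_growth(output_growth: float,
               labour_growth: float, labour_share: float,
               capital_growth: float, capital_share: float) -> float:
    """MFP growth = output growth minus share-weighted input growth."""
    explained = labour_share * labour_growth + capital_share * capital_growth
    return output_growth - explained

# Hypothetical year: GDP +2.0%, labour input +1.0% (income share 0.6),
# capital services +2.0% (income share 0.4).
print(round(mfp_growth(2.0, 1.0, 0.6, 2.0, 0.4), 2))  # 0.6
```

The 0.6 percentage points here are the "residual growth that cannot be explained by the rate of change in the services of labour [and] capital" that the text describes.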
Measures of productivity include Gross Domestic Product (GDP) (OECD 2008, 11) and multifactor productivity. Another productivity measure used by the OECD is the long-term trend in multifactor productivity (MFP), also known as total factor productivity (TFP). This indicator assesses an economy's "underlying productive capacity ('potential output'), itself an important measure of the growth possibilities of economies and of inflationary pressures". MFP measures the residual growth that cannot be explained by the rate of change in the services of labour, capital and intermediate outputs, and is often interpreted as the contribution to economic growth made by factors such as technical and organisational innovation (OECD 2008, 11). According to the OECD's annual economic survey of Canada in June 2012, Canada has experienced weak growth of multifactor productivity, which has declined further since 2002. One way MFP growth is raised is by boosting innovation, and Canada's innovation indicators, such as business R&D and patenting rates, were poor. Raising MFP growth is "needed to sustain rising living standards, especially as the population ages". The mandate of the central bank, the Bank of Canada, is to conduct monetary policy that "preserves the value of money by keeping inflation low and stable". The Bank of Canada issues its bank rate announcement through its Monetary Policy Report, which is released eight times a year. The Bank of Canada, a federal crown corporation, has responsibility for Canada's monetary system. Under the inflation-targeting monetary policy that has been the cornerstone of Canada's monetary and fiscal policy since the early 1990s, the Bank of Canada sets an inflation target. The target was set at 2 per cent, the midpoint of an inflation range of 1 to 3 per cent.
They established a set of inflation-reduction targets to keep inflation "low, stable and predictable", to foster "confidence in the value of money", and to contribute to Canada's sustained growth, employment gains and improved standard of living. In a January 9, 2019 statement on the release of the Monetary Policy Report, Bank of Canada Governor Stephen S. Poloz summarized major events since the October report, such as the "negative economic consequences" of the US-led trade war with China. In response to the ongoing trade war, "bond yields have fallen, yield curves have flattened even more and stock markets have repriced significantly" in "global financial markets". In Canada, low oil prices will affect the "macroeconomic outlook", and Canada's housing sector is not stabilizing as quickly as anticipated. During the period that John Crow was Governor of the Bank of Canada (1987 to 1994), there was a worldwide recession; the bank rate rose to around 14% and unemployment topped 11%. Although since that time inflation targeting has been adopted by "most advanced-world central banks", in 1991 it was innovative, and Canada was an early adopter when then-Finance Minister Michael Wilson approved the Bank of Canada's first inflation targets in the 1991 federal budget. The target was set at 2 per cent. Inflation is measured by the total consumer price index (CPI). In 2011 the Government of Canada and the Bank of Canada extended Canada's inflation-control target to December 31, 2016. The Bank of Canada uses three unconventional instruments to achieve the inflation target: "a conditional statement on the future path of the policy rate", quantitative easing, and credit easing. As a result, interest rates and inflation eventually came down along with the value of the Canadian dollar. From 1991 to 2011 the inflation-targeting regime kept "price gains fairly reliable".
Following the financial crisis of 2007–08, the narrow focus of inflation targeting as a means of providing stable growth in the Canadian economy was questioned. By 2011, then-Bank of Canada Governor Mark Carney argued that the central bank's mandate would allow for more flexible inflation targeting in specific situations where he would consider taking longer "than the typical six to eight quarters to return inflation to 2 per cent". On July 15, 2015, the Bank of Canada announced that it was lowering its target for the overnight rate by another one-quarter percentage point, to 0.5 per cent, "to try to stimulate an economy that appears to have failed to rebound meaningfully from the oil shock woes that dragged it into decline in the first quarter". According to the Bank of Canada announcement, in the first quarter of 2015 total consumer price index (CPI) inflation was about 1 per cent, reflecting "year-over-year price declines for consumer energy products". Core inflation in the first quarter of 2015 was about 2 per cent, with an underlying trend in inflation at about 1.5 to 1.7 per cent. In response to the Bank of Canada's July 15, 2015 rate adjustment, Prime Minister Stephen Harper explained that the economy was "being dragged down by forces beyond Canadian borders such as global oil prices, the European debt crisis, and China's economic slowdown", which had made the global economy "fragile". The Chinese stock market had lost about US$3 trillion of wealth by July 2015, when panicked investors sold stocks, creating declines in the commodities markets that in turn negatively impacted resource-producing countries like Canada. The Bank's main priority has been to keep inflation at a moderate level. As part of that strategy, interest rates were kept at a low level for almost seven years. Since September 2010, the key interest rate (overnight rate) was 0.5%.
In mid-2017, inflation remained below the Bank's 2% target (at 1.6%), mostly because of reductions in the cost of energy, food and automobiles; as well, the economy was in a continuing spurt, with predicted GDP growth of 2.8 percent by year end. Early on 12 July 2017, the bank issued a statement that the benchmark rate would be increased to 0.75%. In 2017, the Canadian economy had the following relative weighting by industry, as percentage value of GDP: The service sector in Canada is vast and multifaceted, employing about three quarters of Canadians and accounting for 70% of GDP. The largest employer is the retail sector, employing almost 12% of Canadians. The retail industry is concentrated mainly in a small number of chain stores clustered together in shopping malls. In recent years, there has been an increase in the number of big-box stores, such as Wal-Mart (of the United States), Real Canadian Superstore, and Best Buy (of the United States). This has led to fewer workers in this sector and a migration of retail jobs to the suburbs. The second-largest portion of the service sector is business services, which hires only a slightly smaller percentage of the population. This includes the financial services, real estate, and communications industries. This portion of the economy has been growing rapidly in recent years. It is largely concentrated in the major urban centres, especially Toronto, Montreal and Vancouver (see Banking in Canada). The education and health sectors are two of Canada's largest, but both are largely under the influence of the government. The health care industry has been growing quickly and is the third largest in Canada. Its rapid growth has led to problems for governments, who must find money to fund it. Canada has an important high-tech industry, and a burgeoning film, television, and entertainment industry creating content for local and international consumption (see Media in Canada).
Tourism is of ever-increasing importance, with the vast majority of international visitors coming from the United States. Casino gaming is currently the fastest-growing component of the Canadian tourism industry, contributing $5 billion in profits for Canadian governments and employing 41,000 Canadians as of 2001. The general pattern of development for wealthy nations was a transition from a raw-material-based economy to a manufacturing-based one, and then to a service-based economy. At its World War II peak in 1944, Canada's manufacturing sector accounted for 29% of GDP, declining to 10.37% in 2017. Canada has not suffered as greatly as most other rich, industrialized nations from the pains of the relative decline in the importance of manufacturing since the 1960s. A 2009 study by Statistics Canada also found that, while manufacturing declined as a relative percentage of GDP from 24.3% in the 1960s to 15.6% in 2005, manufacturing volumes between 1961 and 2005 kept pace with the overall growth in the volume index of GDP. Manufacturing in Canada was hit especially hard by the financial crisis of 2007–08. As of 2017, manufacturing accounts for 10% of Canada's GDP, a relative decline of more than 5% of GDP since 2005. Central Canada is home to branch plants of all the major American and Japanese automobile makers and many parts factories owned by Canadian firms such as Magna International and Linamar Corporation. Canada was the world's nineteenth-largest steel exporter in 2018. In year-to-date 2019 (through March), hereafter YTD 2019, Canada exported 1.39 million metric tons of steel, a 22 percent decrease from 1.79 million metric tons in YTD 2018. Canada's exports represented about 1.5 percent of all steel exported globally in 2017, based on available data. By volume, Canada's 2018 steel exports represented just over one-tenth the volume of the world's largest exporter, China.
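The year-to-date steel figures quoted above can be checked with quick arithmetic:

```python
# Verifying the YTD steel export change cited above: 1.39 million
# metric tons in YTD 2019 versus 1.79 million in YTD 2018.

ytd_2019 = 1.39  # million metric tons, through March 2019
ytd_2018 = 1.79  # million metric tons, through March 2018

change = (ytd_2019 - ytd_2018) / ytd_2018 * 100
print(round(change))  # -22, i.e. the 22 percent decrease cited
```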
In value terms, steel represented 1.4 percent of the total goods Canada exported in 2018; export growth in the decade since 2009 has been 29%. The largest producers in 2018 were ArcelorMittal and Essar Steel Algoma; the first of those alone accounted for roughly half of Canadian steel production through its two subsidiaries. The top two markets for Canada's exports were its NAFTA partners, which by themselves accounted for 92 percent of exports by volume; Canada sent 83 percent of its steel exports to the United States in YTD 2019. The gap between domestic demand and domestic production widened to −2.4 million metric tons, from −0.2 million metric tons in YTD 2018. In YTD 2019, exports as a share of production decreased to 41.6 percent from 53 percent in YTD 2018. In 2017, heavy industry accounted for 10.2% of Canada's greenhouse gas emissions. Canada has access to cheap sources of energy because of its geography. This has enabled the creation of several important industries, such as the large aluminum industries in British Columbia and Quebec. Canada is also one of the world's highest per capita consumers of energy. The electricity sector in Canada has played a significant role in the economic and political life of the country since the late 19th century. The sector is organized along provincial and territorial lines. In a majority of provinces, large government-owned integrated public utilities play a leading role in the generation, transmission and distribution of electricity. Ontario and Alberta have created electricity markets in the last decade in order to increase investment and competition in this sector of the economy. In 2017, the electricity sector accounted for 10% of total national greenhouse gas emissions. Canada has substantial electricity trade with the neighbouring United States, amounting to 72 TWh of exports and 10 TWh of imports in 2017.
Hydroelectricity accounted for 59% of all electric generation in Canada in 2016, making Canada the world's second-largest producer of hydroelectricity after China. Since 1960, large hydroelectric projects, especially in Quebec, British Columbia, Manitoba and Newfoundland and Labrador, have significantly increased the country's generation capacity. The second-largest single source of power (15% of the total) is nuclear power, with several plants in Ontario generating more than half of that province's electricity, and one generator in New Brunswick. This makes Canada the world's sixth-largest producer of electricity generated by nuclear power, producing 95 TWh in 2017. Fossil fuels provide 19% of Canadian electric power, about half as coal (9% of the total) and the remainder a mix of natural gas and oil. Only five provinces use coal for electricity generation. Alberta, Saskatchewan, and Nova Scotia rely on coal for nearly half their generation, while other provinces and territories use little or none. Alberta and Saskatchewan also use a substantial amount of natural gas. Remote communities, including all of Nunavut and much of the Northwest Territories, produce most of their electricity from diesel generators, at high economic and environmental cost. The federal government has set up initiatives to reduce dependence on diesel-fired electricity. Non-hydro renewables are a fast-growing portion of the total, at 7% in 2016. Canada possesses large oil and gas resources centred in Alberta and the northern territories, but also present in neighbouring British Columbia and Saskatchewan. The vast Athabasca oil sands give Canada the world's third-largest reserves of oil, after Saudi Arabia and Venezuela, according to the USGS. The oil and gas industry represents 27% of Canada's total greenhouse gas emissions, an increase of 84% since 1990, mostly due to the development of the oil sands.
Historically, an important issue in Canadian politics is the interplay between the oil and energy industry in Western Canada and the industrial heartland of Southern Ontario. Foreign investment in Western oil projects has fueled Canada's rising dollar. This has raised the price of Ontario's manufacturing exports and made them less competitive, a problem similar to the decline of the manufacturing sector in the Netherlands (the so-called Dutch disease). The National Energy Policy of the early 1980s attempted to make Canada oil-sufficient and to ensure equal supply and price of oil in all parts of Canada, especially for the eastern manufacturing base. The policy proved deeply divisive, as it forced Alberta to sell low-priced oil to eastern Canada, and it was abolished five years after it was first announced, amid the collapse of oil prices in 1985. The new Prime Minister, Brian Mulroney, had campaigned against the policy in the 1984 Canadian federal election. One of the most controversial sections of the Canada–United States Free Trade Agreement of 1988 was a promise that Canada would never charge the United States more for energy than it charged fellow Canadians. Canada is also one of the world's largest suppliers of agricultural products, particularly of wheat and other grains, and a major exporter of agricultural products to the United States and Asia. As with all other developed nations, the proportion of the population and GDP devoted to agriculture fell dramatically over the 20th century. The agriculture and agri-food manufacturing sector contributed $49.0 billion to Canada's GDP in 2015, accounting for 2.6% of total GDP. The sector also accounts for 8.4% of Canada's greenhouse gas emissions. As with other developed nations, the Canadian agriculture industry receives significant government subsidies and supports. However, Canada has been a strong supporter of reducing market-influencing subsidies through the World Trade Organization.
In 2000, Canada spent approximately CDN$4.6 billion on supports for the industry. Of this, $2.32 billion was classified under the WTO designation of "green box" support, meaning it did not directly influence the market, such as money for research or disaster relief. All but $848.2 million were subsidies worth less than 5% of the value of the crops they were provided for. Canada is negotiating bilateral FTAs with the following countries and trade blocs: Canada has been involved in negotiations to create the following regional trade blocs: Canada and the United States share an extensive trading relationship. Canada's job market continues to perform well along with that of the US, reaching a 30-year low in the unemployment rate in December 2006, following 14 consecutive years of employment growth. The United States is by far Canada's largest trading partner, with more than $1.7 billion CAD in trade per day in 2005. In 2009, 73% of Canada's exports went to the United States, and 63% of Canada's imports were from the United States. Trade with Canada makes up 23% of the United States' exports and 17% of its imports. By comparison, in 2005 this was more than U.S. trade with all countries in the European Union combined, and well over twice U.S. trade with all the countries of Latin America combined. Just the two-way trade that crosses the Ambassador Bridge between Michigan and Ontario equals all U.S. exports to Japan. Canada's importance to the United States is not just a border-state phenomenon: Canada is the leading export market for 35 of 50 U.S. states, and is the United States' largest foreign supplier of energy. Bilateral trade increased by 52% between 1989, when the U.S.–Canada Free Trade Agreement (FTA) went into effect, and 1994, when the North American Free Trade Agreement (NAFTA) superseded it. Trade has since increased by 40%. NAFTA continues the FTA's moves toward reducing trade barriers and establishing agreed-upon trade rules.
It also resolves some long-standing bilateral irritants and liberalizes rules in several areas, including agriculture, services, energy, financial services, investment, and government procurement. NAFTA forms the largest trading area in the world, embracing the 405 million people of the three North American countries. The largest component of U.S.–Canada trade is in the commodity sector. The U.S. is Canada's largest agricultural export market, taking well over half of all Canadian food exports. Nearly two-thirds of Canada's forest products, including pulp and paper, are exported to the United States; 72% of Canada's total newsprint production also is exported to the U.S. At $73.6 billion in 2004, U.S.-Canada trade in energy is the largest U.S. energy trading relationship, with the overwhelming majority ($66.7 billion) being exports from Canada. The primary components of U.S. energy trade with Canada are petroleum, natural gas, and electricity. Canada is the United States' largest oil supplier and the fifth-largest energy producing country in the world. Canada provides about 16% of U.S. oil imports and 14% of total U.S. consumption of natural gas. The United States and Canada's national electricity grids are linked, and both countries share hydropower facilities on the western borders. While most of U.S.-Canada trade flows smoothly, there are occasionally bilateral trade disputes, particularly in the agricultural and cultural fields. Usually these issues are resolved through bilateral consultative forums or referral to World Trade Organization (WTO) or NAFTA dispute resolution. In May 1999, the U.S. and Canadian governments negotiated an agreement on magazines that provides increased access for the U.S. publishing industry to the Canadian market. The United States and Canada also have resolved several major issues involving fisheries. 
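A quick check of the 2004 energy-trade figures cited above, showing why the text calls Canadian exports the "overwhelming majority" of the relationship:

```python
# Verifying the 2004 U.S.-Canada energy-trade split quoted above:
# $66.7 billion of the $73.6 billion two-way total was exports from Canada.

total = 73.6        # US$ billions, two-way energy trade, 2004
from_canada = 66.7  # US$ billions, Canadian exports to the U.S.

share = from_canada / total * 100
print(round(share, 1))  # 90.6 -- the "overwhelming majority"
```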
By common agreement, the two countries submitted a Gulf of Maine boundary dispute to the International Court of Justice in 1981; both accepted the court's 12 October 1984 ruling, which demarcated the territorial sea boundary. A current issue between the United States and Canada is the ongoing softwood lumber dispute, as the U.S. alleges that Canada unfairly subsidizes its forestry industry. In 1990, the United States and Canada signed a bilateral Fisheries Enforcement Agreement, which has served to deter illegal fishing activity and reduce the risk of injury during fisheries enforcement incidents. The U.S. and Canada signed a Pacific Salmon Agreement in June 1999 that settled differences over implementation of the 1985 Pacific Salmon Treaty for the next decade. Canada and the United States signed an aviation agreement during Bill Clinton's visit to Canada in February 1995, and air traffic between the two countries has increased dramatically as a result. The two countries also share in the operation of the St. Lawrence Seaway, connecting the Great Lakes to the Atlantic Ocean. The U.S. remains Canada's largest foreign investor and the most popular destination for Canadian foreign investments. In 2018, the stock of U.S. direct investment in Canada totaled $406 billion, while the stock of Canadian investment in the U.S. totaled $595 billion, or 46% of the overall CDIA stock for 2018. This made Canada the second-largest investing country in the U.S. for 2018. U.S. investments are primarily directed at Canada's mining and smelting industries, petroleum, chemicals, the manufacture of machinery and transportation equipment, and finance, while Canadian investment in the United States is concentrated in manufacturing, wholesale trade, real estate, petroleum, finance, and insurance and other services. The OECD reports central government debt as a percentage of GDP.
In 2000, Canada's was 40.9 percent; in 2007, 25.2 percent; in 2008, 28.6 percent; and by 2010, 36.1 percent. Using its net financial liabilities measure, the OECD reports the net figure at 25.2% as of 2008, making Canada's total government debt burden the lowest in the G8. The gross number was 68% in 2011. The CIA World Factbook, updated weekly, measures financial liabilities using gross general government debt, as opposed to the net federal debt used by the OECD and the Canadian federal government. Gross general government debt includes both "intragovernmental debt and the debt of public entities at the sub-national level". For example, the CIA measured Canada's public debt as 84.1% of GDP in 2012 and 87.4% of GDP in 2011, making it 22nd in the world. Household debt, the amount of money that all adults in the household owe financial institutions, includes consumer debt and mortgage loans. In March 2015, the International Monetary Fund reported that Canada's high household debt was one of two vulnerable domestic areas in Canada's economy; the second is its overheated housing market. According to Statistics Canada, total household credit as of July 2019 was CAD$2.2 trillion. According to Philip Cross of the Fraser Institute in May 2015, while the Canadian household debt-to-income ratio is similar to that in the US, lending standards in Canada are tighter than those in the United States, protecting against high-risk borrowers taking out unsustainable debt. Since 1985, 63,755 inbound and outbound deals involving Canada have been announced, with an overall value of US$3.7 billion. Almost 50% of the targets of Canadian companies (outbound deals) have a parent company in the US, while inbound deals are 82 percent from the US. Here is a list of the biggest deals in Canadian history:
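The various debt figures above are all ratios of some debt stock to nominal GDP; the sources differ mainly in which debt stock they count. A sketch with hypothetical dollar amounts (the point is the gap between net and gross measures, not the specific values):

```python
# Debt-to-GDP ratios under two different debt stocks, illustrating why
# the OECD (net federal debt) and the CIA World Factbook (gross general
# government debt) report such different percentages. All dollar
# amounts below are hypothetical.

def debt_to_gdp(debt: float, gdp: float) -> float:
    """Debt stock as a percentage of nominal GDP."""
    return debt / gdp * 100

gdp = 1_800.0                 # hypothetical nominal GDP, $ billions
net_federal_debt = 650.0      # narrower measure: federal debt net of assets
gross_general_debt = 1_500.0  # adds intragovernmental and sub-national debt

print(round(debt_to_gdp(net_federal_debt, gdp), 1))    # 36.1
print(round(debt_to_gdp(gross_general_debt, gdp), 1))  # 83.3
```

The same economy can thus look moderately or heavily indebted depending on whether sub-national and intragovernmental debt is included in the numerator.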
https://en.wikipedia.org/wiki?curid=5195
Telecommunications in Canada Present-day telecommunications in Canada include telephone, radio, television, and internet usage. In the past, telecommunications included telegraphy available through Canadian Pacific and Canadian National. The history of telegraphy in Canada dates back to the Province of Canada. While the first telegraph company was the Toronto, Hamilton and Niagara Electro-Magnetic Telegraph Company, founded in 1846, it was the Montreal Telegraph Company, controlled by Hugh Allan and founded a year later, that dominated in Canada during the technology's early years. Following the 1852 Telegraph Act, Canada's first permanent transatlantic telegraph link was a submarine cable built in 1866 between Ireland and Newfoundland. Telegrams were sent through networks built by Canadian Pacific and Canadian National. In 1868 Montreal Telegraph began facing competition from the newly established Dominion Telegraph Company. In 1880 the Great North Western Telegraph Company was established to connect Ontario and Manitoba, but within a year it was taken over by Western Union, leading briefly to that company's control of almost all telegraphy in Canada. In 1882, Canadian Pacific transmitted its first commercial telegram over telegraph lines it had erected alongside its tracks, breaking Western Union's monopoly. Great North Western Telegraph, facing bankruptcy, was taken over in 1915 by Canadian Northern. By the end of World War II, Canadians used the telephone more than people in any other country. In 1967 the CP and CN networks were merged to form CNCP Telecommunications. As of 1951, approximately 7,000 messages were sent daily from the United States to Canada. An agreement with Western Union required the U.S. company to route messages in a specified ratio of 3:1, with three telegraphic messages transmitted to Canadian National for every message transmitted to Canadian Pacific.
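The 3:1 routing ratio described above can be sketched as a simple round-robin allocator. This is an illustrative model only: the company names and the daily message volume come from the text, but the allocation logic is a hypothetical sketch, not the historical mechanism.

```python
from itertools import cycle, islice

# Round-robin routing honouring the 3:1 ratio: three messages to Canadian
# National (CN) for every one to Canadian Pacific (CP).
pattern = cycle(["CN", "CN", "CN", "CP"])

# Route a day's worth of messages; roughly 7,000 were sent daily as of 1951.
routes = list(islice(pattern, 7000))
print(routes[:8])                               # ['CN', 'CN', 'CN', 'CP', 'CN', 'CN', 'CN', 'CP']
print(routes.count("CN") / routes.count("CP"))  # 3.0
```

Any allocation scheme that preserves the 3:1 proportion over time would satisfy the agreement; the round-robin is just the simplest such scheme.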
The agreement was complicated by the fact that some Canadian destinations were served by only one of the two networks. Telephones - fixed lines: total subscriptions: 14,987,520 (July 2016 est.) Telephones - mobile cellular: 30.45 million (July 2016 est.) Telephone system: (2016) ITU prefixes: Letter combinations available for use in Canada as the first two letters of a television or radio station's call sign are CF, CG, CH, CI, CJ, CK, CY, CZ, VA, VB, VC, VD, VE, VF, VG, VO, VX, VY, XJ, XK, XL, XM, XN and XO. Only CF, CH, CI, CJ and CK are currently in common use, although four radio stations in St. John's, Newfoundland and Labrador retained call letters beginning with VO when Newfoundland joined Canadian Confederation in 1949. Stations owned by the Canadian Broadcasting Corporation use CB through a special agreement with the government of Chile. Some codes beginning with VE and VF are also in use to identify radio repeater transmitters. As of 2016, there were over 1,100 radio stations and audio services broadcasting in Canada. Of these, 711 are private commercial radio stations, which account for over three-quarters of radio stations in Canada. The remainder are a mix of public broadcasters, such as CBC Radio, as well as campus, community, and Aboriginal stations. As of 2016, 780 TV services were broadcasting in Canada. Cable and satellite television services are available throughout Canada. The largest cable providers are Rogers Cable, Shaw Cable, Vidéotron, Telus and Cogeco, while the two licensed satellite providers are Bell TV and Shaw Direct. Bell, Rogers, Telus, and Shaw are among the biggest ISPs in Canada: Bell and Rogers are the main internet service providers in Ontario, while Shaw and Telus are the main competitors in the western provinces.
The three major mobile network operators are Rogers Wireless (10.6 million subscribers), Bell Mobility (9.0 million) and Telus Mobility (8.8 million), which have a combined 91% market share. Federally, telecommunications are overseen by the Canadian Radio-television and Telecommunications Commission (CRTC), as outlined under the provisions of both the Telecommunications Act and the Radiocommunication Act. The CRTC also works with Innovation, Science and Economic Development Canada (formerly Industry Canada) on various technical aspects, including allocating frequencies and call signs, managing the broadcast spectrum, and regulating other technical issues such as interference with electronics equipment. As Canada is part of the North American Numbering Plan, the Canadian Numbering Administration Consortium is responsible for allocating and managing area codes within Canada.
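The subscriber counts and the 91% combined share quoted above imply an approximate size for the whole Canadian mobile market. A back-of-the-envelope check (all figures in millions, taken from the text):

```python
# Subscribers of the three major operators, in millions (from the text).
big_three = {"Rogers Wireless": 10.6, "Bell Mobility": 9.0, "Telus Mobility": 8.8}

combined = sum(big_three.values())   # combined subscribers of the big three
total_market = combined / 0.91       # they hold about 91% of the market
print(round(combined, 1))            # 28.4 million subscribers
print(round(total_market, 1))        # roughly 31.2 million subscribers overall
```

The implied total of roughly 31 million is consistent with the 30.45 million mobile subscriptions (July 2016 est.) cited earlier.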
https://en.wikipedia.org/wiki?curid=5196
Transportation in Canada Transportation in Canada, the world's second-largest country in total area, relies on efficient, high-capacity multimodal transport spanning often vast distances between natural resource extraction sites, agricultural areas and urban areas. Canada's transportation system includes more than of roads, 10 major international airports, 300 smaller airports, of functioning railway track, and more than 300 commercial ports and harbours that provide access to the Pacific, Atlantic and Arctic oceans as well as the Great Lakes and the St. Lawrence Seaway. In 2005, the transportation sector made up 4.2% of Canada's GDP, compared to 3.7% for Canada's mining and oil and gas extraction industries. Transport Canada oversees and regulates most aspects of transportation within federal jurisdiction, including interprovincial transport. This primarily includes rail, air and maritime transportation. Transport Canada is under the direction of the federal government's Minister of Transport. The Transportation Safety Board of Canada is responsible for maintaining transportation safety in Canada by investigating accidents and making safety recommendations. There is a total of of roads in Canada, of which are paved, including of expressways (the third-longest network in the world, behind the Interstate Highway System of the United States and China's National Trunk Highway System). As of 2008, were unpaved. In 2009, there were 20,706,616 road vehicles registered in Canada, of which 96% were vehicles under 4.5 tonnes, 2.4% were vehicles between 4.5 and 15 tonnes and 1.6% were 15 tonnes or greater. These vehicles travelled a total of 333.29 billion kilometres, of which 303.6 billion was for vehicles under 4.5 tonnes, 8.3 billion was for vehicles between 4.5 and 15 tonnes and 21.4 billion was for vehicles over 15 tonnes.
For the 4.5- to 15-tonne trucks, 88.9% of vehicle-kilometres were intra-province trips, 4.9% were inter-province, 2.8% were between Canada and the US and 3.4% were made outside of Canada. For the trucks over 15 tonnes, 59.1% of vehicle-kilometres were intra-province trips, 20% inter-province trips, 13.8% Canada-US trips and 7.1% trips made outside of Canada. Canada's vehicles consumed a total of of gasoline and of diesel. Trucking generated 35% of the total GDP from transport, compared to 25% for rail, water and air combined (the remainder being generated by the industry's transit, pipeline, scenic and support activities). Hence roads are the dominant means of passenger and freight transport in Canada. Roads and highways were managed by provincial and municipal authorities until the construction of the Northwest Highway System (the Alaska Highway) and the initiation of the Trans-Canada Highway project. The Alaska Highway was constructed in 1942, during World War II, for military purposes, connecting Fort St. John, British Columbia with Fairbanks, Alaska. The transcontinental highway, a joint national and provincial expenditure, was begun in 1949 under the Trans-Canada Highway Act of December 10, 1949. The highway was completed in 1962 at a total expenditure of $1.4 billion. Internationally, Canada has road links with both the lower 48 US states and Alaska. The Ministry of Transportation maintains the road network in Ontario and also employs Ministry of Transport Enforcement Officers to administer the Canada Transportation Act and related regulations. The Department of Transportation performs a similar task in New Brunswick. Regulations enacted regarding Canadian highways include the 1971 Motor Vehicle Safety Act and the 1990 Highway Traffic Act. The safety of Canada's roads is moderately good by international standards and is improving in terms of both accidents per head of population and per billion vehicle-kilometres.
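The percentage splits for truck traffic above can be converted back into absolute vehicle-kilometres using the 2009 totals quoted earlier (8.3 and 21.4 billion vehicle-kilometres). A small sketch, using only figures from the text:

```python
# 2009 totals in billions of vehicle-kilometres, and trip-type shares in
# percent, both from the text.
totals = {"4.5-15 t": 8.3, "over 15 t": 21.4}
shares = {
    "4.5-15 t": {"intra-province": 88.9, "inter-province": 4.9,
                 "Canada-US": 2.8, "outside Canada": 3.4},
    "over 15 t": {"intra-province": 59.1, "inter-province": 20.0,
                  "Canada-US": 13.8, "outside Canada": 7.1},
}

for weight_class, split in shares.items():
    assert abs(sum(split.values()) - 100.0) < 0.2   # shares should sum to ~100%
    km = {trip: round(totals[weight_class] * pct / 100, 2)
          for trip, pct in split.items()}
    print(weight_class, km)   # billions of vehicle-kilometres per trip type
```

Both percentage breakdowns sum to 100%, so the conversion partitions each total exactly; for instance, heavy trucks logged about 12.65 billion intra-province vehicle-kilometres (59.1% of 21.4 billion).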
Air transportation made up 9% of the transport sector's GDP generation in 2005. Canada's largest air carrier and its flag carrier is Air Canada, which had 34 million customers in 2006 and, as of April 2010, operates 363 aircraft (including Air Canada Jazz). CHC Helicopter, the largest commercial helicopter operator in the world, is second with 142 aircraft, and WestJet, a low-cost carrier formed in 1996, is third with 100 aircraft. Canada's airline industry saw significant change following the signing of the US-Canada open skies agreement in 1995, when the marketplace became less regulated and more competitive. The Canadian Transportation Agency employs transportation enforcement officers to maintain aircraft safety standards and conduct periodic inspections of all air carriers. The Canadian Air Transport Security Authority is charged with the security of air traffic within Canada. The National Airports Policy was enacted in 1994. Of over 1,800 registered Canadian aerodromes, certified airports, heliports, and floatplane bases, 26 are specially designated under Canada's National Airports System (NAS): these include all airports that handle 200,000 or more passengers each year, as well as the principal airport serving each federal, provincial, and territorial capital. However, since the introduction of the policy only one airport, Iqaluit Airport, has been added, and none have been removed despite dropping below 200,000 passengers. The Government of Canada retains ownership of these airports, with the exception of those in the three territorial capitals, and leases them to local authorities. The next tier consists of 64 regional/local airports formerly owned by the federal government, most of which have now been transferred to other owners (most often municipalities). Below is a table of Canada's ten biggest airports by passenger traffic in 2019. In 2007, Canada had a total of of freight and passenger railway, of which is electrified.
While intercity passenger transportation by rail is now very limited, freight transport by rail remains common. Total revenues of rail services in 2006 were $10.4 billion, of which only 2.8% came from passenger services. Annual revenues are usually about $11 billion, of which 3.2% comes from passengers and the rest from freight. Canadian National and Canadian Pacific Railway are Canada's two major freight railway companies, each with operations throughout North America. In 2007, 357 billion tonne-kilometres of freight were transported by rail, and 4.33 million passengers travelled 1.44 billion passenger-kilometres (an almost negligible amount compared to the 491 billion passenger-kilometres made in light road vehicles). 34,281 people were employed by the rail industry in the same year. Nationwide passenger services are provided by the federal crown corporation Via Rail. Three Canadian cities have commuter rail services: in the Montreal area by AMT, in the Toronto area by GO Transit, and in the Vancouver area by West Coast Express. Smaller railways such as Ontario Northland, Rocky Mountaineer, and Algoma Central also run passenger trains to remote rural areas. Canadian railways use standard gauge rails; see also track gauge in Canada. Canada has railway links with the lower 48 US states, but no connection with Alaska other than a train ferry service from Prince Rupert, British Columbia, although a line has been proposed. There are no other international rail connections. In 2005, of cargo was loaded and unloaded at Canadian ports. The Port of Vancouver is the busiest port in Canada, moving 15% of Canada's total in domestic and international shipping in 2003. Transport Canada oversees most of the regulatory functions related to marine registration, the safety of large vessels, and port pilotage duties. Many of Canada's port facilities are in the process of being divested from federal responsibility to other agencies or municipalities.
Inland waterways include the St. Lawrence Seaway. Transport Canada enforces acts and regulations governing water transportation and safety. The St. Lawrence waterway was at one time the world's greatest inland water navigation system. The main route canals of Canada are those of the St. Lawrence River and the Great Lakes; the others are subsidiary canals. The National Harbours Board administered Halifax, Saint John, Chicoutimi, Trois-Rivières, Churchill, and Vancouver until 1983. At one time, over 300 harbours across Canada were supervised by the Department of Transport. A program of divestiture was implemented around the turn of the millennium, and as of 2014, 493 of the 549 sites identified for divestiture in 1995 had been sold or otherwise transferred, as indicated by a DoT list. The government maintains an active divestiture program, and after divestiture Transport Canada oversees only 17 Canada Port Authorities, covering the 17 largest shipping ports. Canada's merchant marine comprised a total of 173 ships at the end of 2007. Pipelines are part of the energy extraction and transportation network of Canada and are used to transport natural gas, natural gas liquids, crude oil, synthetic crude and other petroleum-based products. Canada has of pipeline for transportation of crude and refined oil, and for liquefied petroleum gas. Most Canadian cities have public transport, if only a bus system. Three Canadian cities have rapid transit systems, four have light rail systems, and three have commuter rail systems (see below). In 2016, 12.4% of Canadians used public transportation to get to work. This compares to 79.5% who got to work using a car (67.4% driving alone, 12.1% as part of a carpool), 5.5% who walked and 1.4% who rode a bike. Government organizations across Canada owned 17,852 buses of various types in 2016. Organizations in Ontario (38.8%) and Quebec (21.9%) accounted for just over three-fifths of the country's total bus fleet.
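The "just over three-fifths" claim above follows directly from the percentages quoted; a quick arithmetic check using only figures from the text:

```python
fleet = 17852                  # buses owned by government organizations, 2016
share = (38.8 + 21.9) / 100    # Ontario + Quebec shares, from the text
print(round(share, 3))         # 0.607 — just over three-fifths (0.6)
print(round(fleet * share))    # roughly 10836 buses in those two provinces
```

The combined 60.7% share works out to roughly 10,800 of the 17,852 buses.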
Urban municipalities owned more than 85% of all buses. In 2016, diesel buses were the leading bus type in Canada (65.9%), followed by bio-diesel (18.1%) and hybrid (9.4%) buses. Electric, natural gas and other buses collectively accounted for the remaining 6.6%. There are three rapid transit systems operating in Canada: the Montreal Metro, the Toronto subway, and the Vancouver SkyTrain. There is also an airport circulator, the Link Train, at Toronto Pearson International Airport; it operates 24 hours a day, seven days a week, is wheelchair-accessible, and is free of charge. There are light rail systems in four cities – the Calgary CTrain, the Edmonton LRT, the Ottawa O-Train, and Waterloo Region's Ion – while Toronto has an extensive streetcar system. Statistics Canada's 2016 Core Public Infrastructure Survey found that all of Canada's 247 streetcars were owned by the City of Toronto. The vast majority (87.9%) of these streetcars were purchased from 1970 to 1999, while 12.1% were purchased in 2016. Reflecting the age of the streetcars, 88.0% were reported to be in very poor condition, while 12.0% were reported to be in good condition. Commuter trains serve the cities and surrounding areas of Montreal, Toronto and Vancouver. The standard history covers the French regime, fur traders, the canals, and early roads, and gives extensive attention to the railways. Prior to the arrival of European settlers, Aboriginal peoples in Canada travelled on foot. They also used canoes, kayaks, umiaks and bull boats, in addition to the snowshoe, toboggan and sled in winter. They had no wheeled vehicles, and no animals larger than dogs. Europeans adopted canoes as they pushed deeper into the continent's interior, and were thus able to travel via the waterways that fed from the St. Lawrence River and Hudson Bay. In the 19th and early 20th centuries, transportation relied on harnessing oxen to "Red River ox carts" or horses to wagons.
Maritime transportation was powered by manual labour, as with the canoe, or by wind on sail. Water or land travel speeds were approximately . Settlement was along river routes. Agricultural commodities were perishable, and trade centres were within . Rural areas centred on villages, which were approximately apart. The advent of steam railways and steamships connected resources and markets across vast distances in the late 19th century. Railways also connected city centres: the traveller went by sleeper car and railway hotel between cities. Crossing the country by train took four or five days, as it still does by car. People generally lived within of the downtown core, so the train could be used for inter-city travel and the tram for commuting. The advent of the interstate, or Trans-Canada Highway, in Canada in 1963 established ribbon development, truck stops, and industrial corridors along throughways. The federal Department of Transport (established 2 November 1936) supervised railways, canals, harbours, marine and shipping, civil aviation, radio and meteorology. The Transportation Act of 1938 and the amended Railway Act placed control and regulation of carriers in the hands of the Board of Transport Commissioners for Canada. The Royal Commission on Transportation was formed on 29 December 1948 to examine transportation services in all areas of Canada, with a view to eliminating economic or geographic disadvantages. The Commission also reviewed the Railway Act to provide uniform yet competitive freight rates.
https://en.wikipedia.org/wiki?curid=5197
Canada–United States relations Canada–United States relations covers the bilateral relations between the adjacent countries of Canada and the United States. Relations between Canada and the United States have historically been extensive, given their shared border and increasingly close cultural and economic ties and similarities. The shared historical and cultural heritage has resulted in one of the most stable and mutually beneficial international relationships in the world. For both countries, the level of trade with the other is at the top of the annual combined import-export total. Tourism and migration between the two nations have increased rapport, but border security was heightened after the September 11 terrorist attacks on the United States in 2001. The U.S. is approximately 9.25 times larger in population and has the dominant cultural and economic influence. Starting with the American Revolution, when anti-American Loyalists fled to Canada, a vocal element in Canada has warned against US dominance or annexation. The War of 1812 saw invasions across the border. In 1815, the war ended with the border unchanged and demilitarized, as were the Great Lakes. The British ceased aiding Native American attacks on the United States, and the United States never again attempted to invade Canada. Apart from minor raids, the border has remained peaceful. As Britain decided to disengage, fears of an American takeover played a role in Canadian Confederation (1867) and in Canada's rejection of free trade (1911). Military collaboration was close during World War II and continued throughout the Cold War, bilaterally through NORAD and multilaterally through NATO. A very high volume of trade and migration continues between the two nations, as well as a heavy overlap of popular and elite culture, a dynamic which has generated closer ties, especially after the signing of the Canada–United States Free Trade Agreement in 1988.
The two nations have the world's longest shared border, and also have significant interoperability within the defense sphere. Recent difficulties have included repeated trade disputes, environmental concerns, Canadian concern for the future of oil exports, and issues of illegal immigration and the threat of terrorism. Trade has continued to expand, especially following the 1988 FTA and the North American Free Trade Agreement (NAFTA) of 1994, which has further merged the two economies. Co-operation on many fronts is expected to extend even further: easing the flow of goods, services, and people across borders, establishing joint border inspection agencies, relocating U.S. food inspection agents to Canadian plants and vice versa, sharing more intelligence, and harmonizing regulations on everything from food to manufactured goods, thus further increasing American-Canadian integration. The foreign policies of the two countries have been closely aligned since the Cold War, although Canada has disagreed with American policies regarding the Vietnam War, the status of Cuba, the Iraq War, missile defense, and the War on Terror. A diplomatic debate has been underway in recent years on whether the Northwest Passage is in international waters or under Canadian sovereignty. Today there are close cultural ties and many similar and identical traits, and according to Gallup's annual public opinion polls, Canada has consistently been Americans' favorite nation, with 96% of Americans viewing Canada favorably in 2012. As of spring 2013, 64% of Canadians had a favorable view of the U.S. and 81% expressed confidence in then-US President Obama to do the right thing in international matters. According to the same poll, 30% viewed the U.S. negatively. Also, according to a 2014 BBC World Service Poll, 86% of Americans view Canada's influence positively, with only 5% expressing a negative view. However, according to the same poll, 43% of Canadians view U.S.
influence positively, with 52% expressing a negative view. In addition, according to the Spring 2017 Global Attitudes Survey, 43% of Canadians viewed the U.S. positively, while 51% held a negative view. More recently, however, a poll in January 2018 showed Canadians' approval of U.S. leadership dropped by over 40 percentage points under President Donald Trump, in line with the view of residents of many other U.S. allied and neutral countries. Before the British conquest of French Canada in 1760, there had been a series of wars between the British and the French which were fought out in the colonies as well as in Europe and on the high seas. In general, the British relied heavily on American colonial militia units, while the French relied heavily on their First Nation allies. The Iroquois Nation were important allies of the British. Much of the fighting involved ambushes and small-scale warfare in the villages along the border between New England and Quebec. The New England colonies had a much larger population than Quebec, so major invasions came from south to north. The First Nation allies, only loosely controlled by the French, repeatedly raided New England villages to kidnap women and children, and to torture and kill the men. Those who survived were brought up as Francophone Catholics. The tension along the border was exacerbated by religion; the French Catholics and English Protestants had a deep mutual distrust. There was a naval dimension as well, involving privateers attacking enemy merchant ships. England seized Quebec from 1629 to 1632, and Acadia in 1613 and again from 1654 to 1670; these territories were returned to France by the peace treaties. The major wars were (to use American names) King William's War (1689–1697), Queen Anne's War (1702–1713), King George's War (1744–1748), and the French and Indian War (1755–1763). In Canada, as in Europe, this era is known as the Seven Years' War.
New England soldiers and sailors were critical to the successful British campaign to capture the French fortress of Louisbourg in 1745, and (after it had been returned by treaty) to capture it again in 1758. From the 1750s to the 21st century, there has been extensive mingling of the Canadian and American populations, with large movements in both directions. New England Yankees settled large parts of Nova Scotia before 1775, and were neutral during the American Revolution. At the end of the American Revolution, about 75,000 United Empire Loyalists moved out of the new United States to Nova Scotia, New Brunswick, and the lands of Quebec east and south of Montreal. From 1790 to 1812 many farmers moved from New York and New England into Upper Canada (mostly to Niagara and the north shore of Lake Ontario). In the mid- and late 19th century, gold rushes attracted American prospectors, mostly to British Columbia during the Fraser Canyon and Cariboo gold rushes, and later to the Yukon Territory. In the early 20th century, the opening of land blocks in the Prairie Provinces attracted many farmers from the American Midwest. Many Mennonites immigrated from Pennsylvania and formed their own colonies. In the 1890s some Mormons went north to form communities in Alberta after The Church of Jesus Christ of Latter-day Saints rejected plural marriage. The 1960s saw the arrival of about 50,000 draft-dodgers who opposed the Vietnam War. In the late 19th and early 20th centuries, about 900,000 French Canadians moved to the U.S., with 395,000 residents there in 1900. Two-thirds went to mill towns in New England, where they formed distinctive ethnic communities. By the late 20th century, they had abandoned the French language, but most kept the Catholic religion. About twice as many English Canadians came to the U.S., but they did not form distinctive ethnic settlements.
Canada was a way-station through which immigrants from other lands stopped for a while, ultimately heading to the U.S. Between 1851 and 1951, 7.1 million people arrived in Canada (mostly from continental Europe), and 6.6 million left Canada, most of them for the U.S. At the outset of the American Revolutionary War, the American revolutionaries hoped the French Canadians in Quebec and the colonists in Nova Scotia would join their rebellion, and Canada was pre-approved for admission to the United States in the Articles of Confederation. When Canada was invaded, thousands joined the American cause and formed regiments that fought during the war; however, most remained neutral and some joined the British effort. Britain advised the French Canadians that the British Empire already enshrined their rights in the Quebec Act, which the American colonies had viewed as one of the Intolerable Acts. The American invasion was a fiasco, and Britain tightened its grip on its northern possessions; in 1777, a major British invasion into New York led to the surrender of the entire British army at Saratoga and led France to enter the war as an ally of the U.S. The French Canadians largely ignored France's appeals for solidarity. After the war Canada became a refuge for about 75,000 Loyalists who either wanted to leave the U.S. or were compelled by Patriot reprisals to do so. Among the original Loyalists were 3,500 free African Americans. Most went to Nova Scotia, and in 1792, 1,200 migrated to Sierra Leone. About 2,000 black slaves were brought in by Loyalist owners; they remained slaves in Canada until the Empire abolished slavery in 1833. Before 1860, about 30,000–40,000 black people entered Canada; many were already free and others were escaped slaves who came through the Underground Railroad. The Treaty of Paris, which ended the war, called for British forces to vacate all their forts south of the Great Lakes border.
Britain refused to do so, citing failure of the United States to provide financial restitution for Loyalists who had lost property in the war. The Jay Treaty in 1795 with Great Britain resolved that lingering issue and the British departed the forts. Thomas Jefferson saw the nearby British presence as a threat to the United States, and so he opposed the Jay Treaty, and it became one of the major political issues in the United States at the time. Thousands of Americans immigrated to Upper Canada (Ontario) from 1785 to 1812 to obtain cheaper land and better tax rates prevalent in that province; despite expectations that they would be loyal to the U.S. if a war broke out, in the event they were largely non-political. Tensions mounted again after 1805, erupting into the War of 1812, when the United States declared war on Britain. The Americans were angered by British harassment of U.S. ships on the high seas and seizure of 6,000 sailors from American ships, severe restrictions against neutral American trade with France, and British support for hostile Native American tribes in Ohio and territories the U.S. had gained in 1783. American "honor" was an implicit issue. While the Americans could not hope to defeat the Royal Navy and control the seas, they could call on an army much larger than the British garrison in Canada, and so a land invasion of Canada was proposed as the most advantageous means of attacking the British Empire. Americans on the western frontier also hoped an invasion would bring an end to British support of Native American resistance to American expansion, typified by Tecumseh's coalition of tribes. Americans may also have wanted to acquire Canada. Once war broke out, the American strategy was to seize Canada. There was some hope that settlers in western Canada—most of them recent immigrants from the U.S.—would welcome the chance to overthrow their British rulers. 
However, the American invasions were defeated primarily by British regulars with support from Native Americans and Upper Canada militia. Aided by the large Royal Navy, a series of British raids on the American coast were highly successful, culminating with an attack on Washington that resulted in the British burning of the White House, the Capitol, and other public buildings. However, the later battles of Baltimore, Plattsburgh, and New Orleans all ended in defeat for the British. At the end of the war, Britain's American Indian allies had largely been defeated, and the Americans controlled a strip of western Ontario centered on Fort Malden. However, Britain held much of Maine and, with the support of their remaining American Indian allies, huge areas of the Old Northwest, including Wisconsin and much of Michigan and Illinois. With the surrender of Napoleon in 1814, Britain ended the naval policies that angered Americans; with the defeat of the Indian tribes, the threat to American expansion was ended. The upshot was that both the United States and Canada asserted their sovereignty, Canada remained under British rule, and London and Washington had nothing more to fight over. The war was ended by the Treaty of Ghent, which took effect in February 1815. A series of postwar agreements further stabilized peaceful relations along the Canadian-US border. Canada reduced American immigration for fear of undue American influence, and built up the Anglican Church of Canada as a counterweight to the largely American Methodist and Baptist churches. In later years, Anglophone Canadians, especially in Ontario, viewed the War of 1812 as a heroic and successful resistance against invasion and as a victory that defined them as a people. The myth that the Canadian militia had defeated the invasion almost single-handedly, known as the "militia myth", became highly prevalent after the war, having been propounded by John Strachan, Anglican Bishop of York.
Meanwhile, the United States celebrated victory in its "Second War of Independence," and war heroes such as Andrew Jackson and William Henry Harrison headed to the White House. In the aftermath of the War of 1812, pro-British conservatives led by Anglican Bishop John Strachan took control in Ontario ("Upper Canada") and promoted the Anglican religion as opposed to the more republican Methodist and Baptist churches. A small interlocking elite, known as the Family Compact, took full political control. Democracy, as practiced in the US, was ridiculed. The policies had the desired effect of deterring immigration from the United States. Revolts in favor of democracy in Ontario and Quebec ("Lower Canada") in 1837 were suppressed; many of the leaders fled to the US. The American policy was to largely ignore the rebellions, and indeed to ignore Canada generally in favor of westward expansion of the American frontier. The British Empire and Canada were neutral in the American Civil War, and about 40,000 Canadian citizens volunteered for the Union Army (many already lived in the U.S.), while a few volunteered for the Confederate Army. However, hundreds of Americans who were called up in the draft fled to Canada. In 1864 the Confederate government tried to use Canada as a base to attack American border towns. Confederate agents raided the town of St. Albans, Vermont, on October 19, 1864, killing an American citizen and robbing three banks of over $200,000. The three Confederates escaped to Canada, where they were arrested but then released. Many Americans suspected – falsely – that the Canadian government had known of the raid ahead of time. There was widespread anger when the raiders were released by a local court in Canada. The American Secretary of State William H. Seward let the British government know, "it is impossible to consider those proceedings as either legal, just or friendly towards the United States."
The Fenian raids were small-scale attacks carried out by the Fenian Brotherhood, an Irish Republican organization based among Irish Catholics in the United States. The targets were British Army forts, customs posts, and other sites near the border. The raids were small, unsuccessful episodes in 1866, and again from 1870 to 1871. The goal was to bring pressure on Great Britain to withdraw from Ireland. None of these raids achieved their aims, and all were quickly defeated by local Canadian forces. At the end of the Civil War in 1865, Americans were angry at British support for the Confederacy; one result was official toleration of Fenian efforts to use the U.S. as a base to attack Canada. Some leaders demanded a huge payment, on the premise that British involvement had lengthened the war. Senator Charles Sumner, the chairman of the Senate Foreign Relations Committee, originally wanted to ask for $2 billion, or alternatively the ceding of all of Canada to the United States. When American Secretary of State William H. Seward negotiated the Alaska Purchase with Russia in 1867, he intended it as the first step in a comprehensive plan to gain control of the entire northwest Pacific Coast. Seward was a firm believer in Manifest Destiny, primarily for its commercial advantages to the U.S. Seward expected British Columbia to seek annexation to the U.S. and thought Britain might accept this in exchange for the "Alabama" claims. Soon other elements endorsed annexation; their plan was to annex British Columbia, the Red River Colony (Manitoba), and Nova Scotia in exchange for dropping the damage claims. The idea reached a peak in the spring and summer of 1870, with American expansionists, Canadian separatists, and pro-American Englishmen seemingly combining forces. The plan was dropped for multiple reasons. 
London continued to stall; American commercial and financial groups pressed Washington for a quick settlement of the dispute on a cash basis; growing Canadian nationalist sentiment in British Columbia called for staying inside the British Empire; Congress became preoccupied with Reconstruction; and most Americans showed little interest in territorial expansion. The "Alabama Claims" dispute went to international arbitration. In one of the first major cases of arbitration, the tribunal in 1872 supported the American claims and ordered Britain to pay $15.5 million. Britain paid, and the episode ended in peaceful relations. Canada became a self-governing dominion in 1867 in internal affairs, while Britain controlled diplomacy and defence policy. Prior to Confederation, there was an Oregon boundary dispute in which the Americans claimed territory up to latitude 54°40′. That issue was resolved by splitting the disputed territory; the northern half became British Columbia, and the southern half the states of Washington and Oregon. Strained relations with America continued, however, due to a series of small-scale armed incursions known as the Fenian raids, carried out by Irish-American Civil War veterans across the border from 1866 to 1871 in an attempt to trade Canada for Irish independence. The American government, angry at Canadian tolerance of Confederate raiders during the American Civil War, moved very slowly to disarm the Fenians. The British government, in charge of diplomatic relations, protested cautiously, as Anglo-American relations were tense. Much of the tension was relieved as the Fenians faded away and in 1872 by the settlement of the Alabama Claims, when Britain paid the U.S. $15.5 million for war losses caused by warships built in Britain and sold to the Confederacy. Disputes over ocean boundaries on Georges Bank and over fishing, whaling, and sealing rights in the Pacific were settled by international arbitration, setting an important precedent. 
After 1850, the pace of industrialization and urbanization was much faster in the United States, drawing a wide range of immigrants from the North. By 1870, one-sixth of all the people born in Canada had moved to the United States, with the highest concentrations in New England, which was the destination of Francophone emigrants from Quebec and Anglophone emigrants from the Maritimes. It was common for people to move back and forth across the border, such as seasonal lumberjacks, entrepreneurs looking for larger markets, and families looking for jobs in the textile mills that paid much higher wages than in Canada. The southward migration slackened after 1890, as Canadian industry began a growth spurt. By then, the American frontier was closing, and thousands of farmers looking for fresh land moved from the United States north into the Prairie Provinces. The net result of the flows was that in 1901 there were 128,000 American-born residents in Canada (3.5% of the Canadian population) and 1.18 million Canadian-born residents in the United States (1.6% of the U.S. population). A short-lived controversy was the Alaska boundary dispute, settled in favor of the United States in 1903. The issue had been unimportant until the Klondike Gold Rush brought tens of thousands of men to Canada's Yukon, who had to arrive through American ports. Canada needed its own port and claimed a legal right to one near the present American town of Haines, Alaska, which would provide an all-Canadian route to the rich goldfields. The dispute was settled by arbitration, and the British delegate voted with the Americans—to the astonishment and disgust of Canadians, who suddenly realized that Britain considered its relations with the United States paramount compared to those with Canada. The arbitration validated the status quo, but made Canada angry at London. In 1907 a minor controversy arose over USS "Nashville" sailing into the Great Lakes via Canada without Canadian permission. 
To head off future embarrassments, in 1909 the two sides signed the Boundary Waters Treaty, and the International Joint Commission was established to manage the Great Lakes and keep them disarmed. It was amended in World War II to allow the building and training of warships. Anti-Americanism reached a shrill peak in Canada in 1911, when the Liberal government negotiated a reciprocity treaty with the U.S. that would lower trade barriers. Canadian manufacturing interests were alarmed that free trade would allow the bigger and more efficient American factories to take their markets. The Conservatives made it a central campaign issue in the 1911 election, warning that it would be a "sell out" to the United States, with economic annexation a special danger. The Conservative slogan was "No truck or trade with the Yankees", as they appealed to Canadian nationalism and nostalgia for the British Empire to win a major victory. Canada demanded and received permission from London to send its own delegation to the Versailles Peace Talks in 1919, with the proviso that it sign the treaty under the British Empire. Canada subsequently took responsibility for its own foreign and military affairs in the 1920s. Its first ambassador to the United States, Vincent Massey, was named in 1927. The United States' first ambassador to Canada was William Phillips. Canada became an active member of the British Commonwealth, the League of Nations, and the World Court, none of which included the U.S. In July 1923, as part of his Pacific Northwest tour and a week before his death, US President Warren Harding visited Vancouver, making him the first sitting US president to visit Canada. The premier of British Columbia, John Oliver, and the mayor of Vancouver, Charles Tisdall, hosted a lunch in his honor at the Hotel Vancouver. Over 50,000 people heard Harding speak in Stanley Park. A monument to Harding designed by Charles Marega was unveiled in Stanley Park in 1925. 
Relations with the United States were cordial until 1930, when Canada vehemently protested the new Smoot–Hawley Tariff Act, by which the U.S. raised tariffs (taxes) on products imported from Canada. Canada retaliated with higher tariffs of its own against American products and moved toward more trade within the British Commonwealth. U.S.–Canadian trade fell 75% as the Great Depression dragged both countries down. Into the 1920s, the war and naval departments of both nations designed hypothetical war game scenarios on paper with the other as an enemy. These were routine training exercises; the departments were never told to get ready for a real war. In 1921, Canada developed Defence Scheme No. 1 for an attack on American cities and for forestalling an invasion by the United States until British reinforcements arrived. Through the later 1920s and 1930s, the United States Army War College developed a plan for a war with the British Empire waged largely on North American territory, known as War Plan Red. Herbert Hoover, meeting in 1927 with British Ambassador Sir Esme Howard, agreed on the "absurdity of contemplating the possibility of war between the United States and the British Empire." In 1938, as the roots of World War II were set in motion, U.S. President Franklin Roosevelt gave a public speech at Queen's University in Kingston, Ontario, declaring that the United States would not sit idly by if another power tried to dominate Canada. Diplomats saw it as a clear warning to Germany not to attack Canada. The two nations cooperated closely in World War II, as both saw new levels of prosperity and a determination to defeat the Axis powers. Prime Minister William Lyon Mackenzie King and President Franklin D. Roosevelt were determined not to repeat the mistakes of their predecessors. They met in August 1940 at Ogdensburg, issuing a declaration calling for close cooperation, and formed the Permanent Joint Board on Defense (PJBD). 
King sought to raise Canada's international visibility by hosting the August 1943 Quadrant conference in Quebec on military and political strategy; he was a gracious host but was kept out of the important meetings by Winston Churchill and Roosevelt. Canada allowed the construction of the Alaska Highway and participated in the building of the atomic bomb. Some 49,000 Americans joined the RCAF (Canadian) or RAF (British) air forces through the Clayton Knight Committee, which had Roosevelt's permission to recruit in the U.S. in 1940–42. American attempts in the mid-1930s to integrate British Columbia into a united West Coast military command had aroused Canadian opposition. Fearing a Japanese invasion of Canada's vulnerable British Columbia Coast, American officials urged the creation of a united military command for an eastern Pacific Ocean theater of war. Canadian leaders feared American imperialism and the loss of autonomy more than a Japanese invasion. In 1941, Canadians successfully argued within the PJBD for mutual cooperation rather than unified command for the West Coast. The United States built large military bases in Newfoundland during World War II. At the time it was a British crown colony, having lost dominion status. The American spending ended the depression and brought new prosperity; Newfoundland's business community sought closer ties with the United States, as expressed by the Economic Union Party. Ottawa took notice and wanted Newfoundland to join Canada, which it did after hotly contested referenda. There was little demand in the United States for the acquisition of Newfoundland, so the United States did not protest the British decision not to allow an American option on the Newfoundland referendum. Prime Minister William Lyon Mackenzie King, working closely with his Foreign Minister Louis St. Laurent, handled foreign relations cautiously from 1945 to 1948. 
Canada donated money to the United Kingdom to help it rebuild, was elected to the UN Security Council, and helped design NATO. However, Mackenzie King rejected free trade with the United States and decided not to play a role in the Berlin airlift. Canada had been actively involved in the League of Nations, primarily because it could act separately from Britain. It played a modest role in the postwar formation of the United Nations, as well as the International Monetary Fund. It played a somewhat larger role in 1947 in designing the General Agreement on Tariffs and Trade. From the mid-20th century onwards, Canada and the United States became extremely close partners, and Canada was a close ally of the United States during the Cold War. While Canada openly accepted draft evaders and later deserters from the United States, there was never a serious international dispute over Canada's actions, whereas Sweden's acceptance was heavily criticized by the United States. The issue of accepting American exiles became a local political debate in Canada that focused on Canada's sovereignty in its immigration law. The United States did not become involved because American politicians viewed Canada as a geographically close ally not worth disturbing. The United States had become Canada's largest market, and after the war the Canadian economy became so dependent on smooth trade flows with the United States that in 1971, when the United States enacted the "Nixon Shock" economic policies (including a 10% tariff on all imports), the Canadian government was thrown into a panic. Washington refused to exempt Canada from its 1971 New Economic Policy, so Trudeau saw a solution in closer economic ties with Europe. Trudeau proposed a "Third Option" policy of diversifying Canada's trade and downgrading the importance of the American market. In a 1972 speech in Ottawa, Nixon declared the "special relationship" between Canada and the United States dead. 
Relations deteriorated on many points in the Nixon years (1969–74), including trade disputes, defense agreements, energy, fishing, the environment, cultural imperialism, and foreign policy. They changed for the better when Trudeau and President Jimmy Carter (1977–81) found a better rapport. The late 1970s saw a more sympathetic American attitude toward Canadian political and economic needs, the pardoning of draft evaders who had moved to Canada, and the passing of old issues such as the Watergate scandal and the Vietnam War. Canada more than ever welcomed American investments during the stagflation that hurt both nations. The main issues in Canada–U.S. relations in the 1990s focused on the NAFTA agreement, which took effect in 1994. It created a common market that by 2014 was worth $19 trillion, encompassed 470 million people, and had created millions of jobs. Wilson says, "Few dispute that NAFTA has produced large and measurable gains for Canadian consumers, workers, and businesses." However, he adds, "NAFTA has fallen well short of expectations." Since the arrival of the Loyalists as refugees from the American Revolution in the 1780s, historians have identified a constant theme of Canadian fear of the United States and of "Americanization" or a cultural takeover. In the War of 1812, for example, the enthusiastic response by French militia to defend Lower Canada reflected, according to Heidler and Heidler (2004), "the fear of Americanization." Scholars have traced this attitude over time in Ontario and Quebec. Canadian intellectuals who wrote about the U.S. in the first half of the 20th century identified America as the world center of modernity, and deplored it. 
Anti-American Canadians (who admired the British Empire) explained that Canada had narrowly escaped conquest by an America marked by its rejection of tradition, its worship of "progress" and technology, and its mass culture; they argued that Canada was much better because of its commitment to orderly government and societal harmony. There were a few ardent defenders of the nation to the south, notably liberal and socialist intellectuals such as F. R. Scott and Jean-Charles Harvey (1891–1967). Looking at television, Collins (1990) finds that it is in Anglophone Canada that fear of cultural Americanization is most powerful, for there the attractions of the U.S. are strongest. Meren (2009) argues that after 1945, the emergence of Quebec nationalism and the desire to preserve French-Canadian cultural heritage led to growing anxiety regarding American cultural imperialism and Americanization. In 2006, surveys showed that 60 percent of Québécois had a fear of Americanization, while other surveys showed they preferred their current situation to that of the Americans in the realms of health care, quality of life as seniors, environmental quality, poverty, educational system, racism, and standard of living. While agreeing that job opportunities are greater in America, 89 percent disagreed with the notion that they would rather be in the United States, and they were more likely to feel closer to English Canadians than to Americans. However, there is evidence that the elites in Quebec are much less fearful of Americanization, and much more open to economic integration, than the general public. The history has been traced in detail by the leading Canadian historian J. L. Granatstein in "Yankee Go Home: Canadians and Anti-Americanism" (1997). Current studies report the phenomenon persists. 
Two scholars report, "Anti-Americanism is alive and well in Canada today, strengthened by, among other things, disputes related to NAFTA, American involvement in the Middle East, and the ever-increasing Americanization of Canadian culture." Jamie Glazov writes, "More than anything else, Diefenbaker became the tragic victim of Canadian anti-Americanism, a sentiment the prime minister had fully embraced by 1962. [He was] unable to imagine himself (or his foreign policy) without enemies." Historian J. M. Bumsted says, "In its most extreme form, Canadian suspicion of the United States has led to outbreaks of overt anti-Americanism, usually spilling over against American residents in Canada." John R. Wennersten writes, "But at the heart of Canadian anti-Americanism lies a cultural bitterness that takes an American expatriate unaware. Canadians fear the American media's influence on their culture and talk critically about how Americans are exporting a culture of violence in its television programming and movies." However, Kim Nossal points out that the Canadian variety is much milder than anti-Americanism in some other countries. By contrast, Americans show very little knowledge or interest one way or the other regarding Canadian affairs. Canadian historian Frank Underhill, quoting Canadian playwright Merrill Denison, summed it up: "Americans are benevolently ignorant about Canada, whereas Canadians are malevolently informed about the United States." The executive of each country is represented differently. The President of the United States serves as both the head of state and head of government, and his "administration" is the executive, while the Prime Minister of Canada is head of government only, and his or her "government" or "ministry" directs the executive. In 1940, W.L. Mackenzie King and Franklin D. Roosevelt signed a defense pact, known as the Ogdensburg Agreement. Diefenbaker and Kennedy did not get along well personally. 
This was evident in Diefenbaker's response to the Cuban Missile Crisis, during which he did not support the United States. However, Diefenbaker's Minister of Defence went behind Diefenbaker's back and put Canada's military on high alert to try to appease Kennedy. In 1965, Lester B. Pearson gave a speech in Philadelphia criticizing US involvement in the Vietnam War. This infuriated Lyndon B. Johnson, who gave him a harsh talk, saying "You don't come here and piss on my rug". Relations between Brian Mulroney and Ronald Reagan were famously close. This relationship resulted in negotiations for the Canada–United States Free Trade Agreement and the U.S.–Canada Air Quality Agreement to reduce acid-rain-causing emissions, both major policy goals of Mulroney that would be finalized under the presidency of George H. W. Bush. Although Jean Chrétien was wary of appearing too close to President Bill Clinton, both men had a passion for golf. During a news conference with Prime Minister Chrétien in April 1997, President Clinton quipped, "I don't know if any two world leaders have played golf together more than we have, but we meant to break a record". Their governments had many small trade quarrels over the Canadian content of American magazines, softwood lumber, and so on, but on the whole were quite friendly. Both leaders had run on reforming or abolishing NAFTA, but the agreement went ahead with the addition of environmental and labor side agreements. Crucially, the Clinton administration lent rhetorical support to Canadian unity during the 1995 referendum in Quebec on separation from Canada. Relations between Chrétien and George W. Bush were strained throughout their overlapping times in office. After the September 11 terrorist attacks, Jean Chrétien publicly mused that U.S. foreign policy might be part of the "root causes" of terrorism. 
Some Americans criticized his "smug moralism", and Chrétien's public refusal to support the 2003 Iraq war was met with negative responses in the United States, especially among conservatives. Stephen Harper and George W. Bush were thought to share warm personal relations and also close ties between their administrations. Because Bush was so unpopular among liberals in Canada (particularly in the media), this was underplayed by the Harper government. Shortly after being congratulated by Bush for his victory in February 2006, Harper rebuked U.S. ambassador to Canada David Wilkins for criticizing the Conservatives' plans to assert Canada's sovereignty over the Arctic Ocean waters with military force. President Barack Obama's first international trip was to Canada on February 19, 2009, thereby sending a strong message of peace and cooperation. With the exception of Canadian lobbying against "Buy American" provisions in the U.S. stimulus package, relations between the two administrations were smooth. They also held friendly bets on hockey games during the Winter Olympic season. In the 2010 Winter Olympics, hosted by Canada in Vancouver, Canada defeated the US in both gold medal matches, entitling Stephen Harper to receive a case of Molson Canadian beer from Barack Obama; conversely, had Canada lost, Harper would have provided a case of Yuengling beer to Obama. During the 2014 Winter Olympics, alongside U.S. Secretary of State John Kerry and Minister of Foreign Affairs John Baird, Stephen Harper was given a case of Samuel Adams beer by Obama for the Canadian gold medal victory over the US in women's hockey and the semi-final victory over the US in men's hockey. On February 4, 2011, Harper and Obama issued a "Declaration on a Shared Vision for Perimeter Security and Economic Competitiveness" and announced the creation of the Canada–United States Regulatory Cooperation Council (RCC) "to increase regulatory transparency and coordination between the two countries." 
Under the RCC mandate, Health Canada and the United States Food and Drug Administration (FDA) undertook the "first of its kind" initiative by selecting "as its first area of alignment common cold indications for certain over-the-counter antihistamine ingredients (GC 2013-01-10)." On December 7, 2011, Harper flew to Washington, met with Obama and signed an agreement to implement the joint action plans that had been developed since the initial meeting in February. The plans called on both countries to spend more on border infrastructure, share more information on people who cross the border, and acknowledge more of each other's safety and security inspections on third-country traffic. An editorial in "The Globe and Mail" praised the agreement for giving Canada the ability to track whether failed refugee claimants have left Canada via the U.S. and for eliminating "duplicated baggage screenings on connecting flights". The agreement is not a legally binding treaty, and relies on the political will and ability of the executives of both governments to implement its terms. These types of executive agreements are routine—on both sides of the Canada–U.S. border. President Barack Obama and Prime Minister Justin Trudeau first met formally at the APEC summit meeting in Manila, Philippines in November 2015, nearly a week after the latter was sworn into office. Both leaders expressed eagerness for increased cooperation and coordination between the two countries during the course of Trudeau's government, with Trudeau promising an "enhanced Canada–U.S. partnership". On November 6, 2015, Obama announced the U.S. State Department's rejection of the proposed Keystone XL pipeline, the fourth phase of the Keystone oil pipeline system running between Canada and the United States. Trudeau expressed disappointment but said that the rejection would not damage Canada–U.S. 
relations and would instead provide a "fresh start" to strengthening ties through cooperation and coordination, saying that "the Canada–U.S. relationship is much bigger than any one project." Obama has since praised Trudeau's efforts to prioritize the reduction of climate change, calling it "extraordinarily helpful" in establishing a worldwide consensus on addressing the issue. Although Trudeau told Obama of his plans to withdraw Canada's McDonnell Douglas CF-18 Hornet jets assisting in the American-led intervention against ISIL, Trudeau said that Canada would still "do more than its part" in combating the terrorist group by increasing the number of Canadian special forces members training and fighting on the ground in Iraq and Syria. Trudeau visited the White House for an official visit and state dinner on March 10, 2016. Trudeau and Obama were reported to have shared warm personal relations during the visit, making humorous remarks about which country was better at hockey and which country had better beer. Obama complimented Trudeau's 2015 election campaign for its "message of hope and change" and "positive and optimistic vision". Obama and Trudeau also held "productive" discussions on climate change and relations between the two countries, and Trudeau invited Obama to speak in the Canadian parliament in Ottawa later in the year. Following the victory of Donald Trump in the 2016 U.S. presidential election, Trudeau congratulated him and invited him to visit Canada at the "earliest opportunity." Prime Minister Trudeau and President Trump formally met for the first time at the White House on February 13, 2017, nearly a month after Trump was sworn into office. Trump has ruffled relations with Canada with tariffs on softwood lumber. Diafiltered milk has also been brought up by Trump as an area that needs to be negotiated, and Trump is expected to renegotiate NAFTA with Canada. 
In June 2018, after Trudeau explained that Canadians would not be "pushed around" by the Trump tariffs on Canada's aluminum and steel, Trump labelled Trudeau "dishonest" and "meek" and accused him of making "false statements", although it is unclear which statements Trump was referring to. Trump's adviser on trade, Peter Navarro, said that there was a "special place in hell" for Trudeau as he employed "bad faith diplomacy with President Donald J. Trump and then tries to stab him in the back on the way out the door ... that comes right from Air Force One." Days later, Trump said that Trudeau's comments were "going to cost a lot of money for the people of Canada". The Canadian military, like the forces of other NATO countries, has fought alongside the United States in most major conflicts since World War II, including the Korean War, the Gulf War, the Kosovo War, and most recently the war in Afghanistan. The main exceptions to this were the Canadian government's opposition to the Vietnam War and the Iraq War, which caused some brief diplomatic tensions. Despite these issues, military relations have remained close. American defense arrangements with Canada are more extensive than with any other country. The Permanent Joint Board of Defense, established in 1940, provides policy-level consultation on bilateral defense matters. The United States and Canada share North Atlantic Treaty Organization (NATO) mutual security commitments. In addition, American and Canadian military forces have cooperated since 1958 on continental air defense within the framework of the North American Aerospace Defense Command (NORAD). Canadian forces provided indirect support for the American invasion of Iraq that began in 2003. Moreover, interoperability with the American armed forces has been a guiding principle of Canadian military force structuring and doctrine since the end of the Cold War. Canadian navy frigates, for instance, integrate seamlessly into American carrier battle groups. 
In commemoration of the 200th anniversary of the War of 1812, ambassadors from Canada and the US and naval officers from both countries gathered at the Pritzker Military Library on August 17, 2012, for a panel discussion on Canada–US relations, with emphasis on national security-related matters. Also as part of the commemoration, the navies of both countries sailed together throughout the Great Lakes region. Canada's elite JTF2 unit joined American special forces in Afghanistan shortly after the al-Qaida attacks of September 11, 2001. Canadian forces joined the multinational coalition in Operation Anaconda in January 2002. On April 18, 2002, an American pilot bombed Canadian forces involved in a training exercise, killing four and wounding eight Canadians. A joint American-Canadian inquiry determined the cause of the incident to be pilot error: the pilot had interpreted ground fire as an attack and ignored orders that he felt were "second-guessing" his field tactical decision. Canadian forces assumed a six-month command rotation of the International Security Assistance Force in 2003; in 2005, Canadians assumed operational command of the multi-national brigade in Kandahar, with 2,300 troops, and supervised the Provincial Reconstruction Team in Kandahar, where al-Qaida forces were most active. Canada has also deployed naval forces in the Persian Gulf since 1991 in support of the UN Gulf Multinational Interdiction Force. The Canadian Embassy in Washington, D.C. maintains a public relations website named CanadianAlly.com, which is intended "to give American citizens a better sense of the scope of Canada's role in North American and Global Security and the War on Terror". The New Democratic Party and some recent Liberal leadership candidates have expressed opposition to Canada's expanded role in the Afghan conflict on the ground that it is inconsistent with Canada's historic role (since the Second World War) of peacekeeping operations. 
According to contemporary polls, 71% of Canadians were opposed to the 2003 invasion of Iraq. Many Canadians, and the former Liberal Cabinet headed by Paul Martin (as well as many Americans, such as Bill Clinton and Barack Obama), made a policy distinction between the conflicts in Afghanistan and Iraq, unlike the Bush Doctrine, which linked them together in a "Global war on terror". Canada has been involved in international responses to the threats from Daesh/ISIS/ISIL in Syria and Iraq, and is a member of the Global Coalition to Counter Daesh. In October 2016, Foreign Affairs Minister Dion and National Defence Minister Sajjan met with the U.S. special envoy for this coalition. The Americans thanked Canada "for the role of Canadian Armed Forces (CAF) in providing training and assistance to Iraqi security forces, as well as the CAF's role in improving essential capacity-building capabilities with regional forces." Canada and the United States have the world's largest trading relationship, with huge quantities of goods and people flowing across the border each year. Since the 1987 Canada–United States Free Trade Agreement, there have been no tariffs on most goods passed between the two countries. In the course of the softwood lumber dispute, the U.S. has placed tariffs on Canadian softwood lumber because of what it argues is an unfair Canadian government subsidy, a claim which Canada disputes. The dispute has cycled through several agreements and arbitration cases. Other notable disputes include the Canadian Wheat Board and Canadian cultural "restrictions" on magazines and television (see CRTC, CBC, and National Film Board of Canada). Canada has also been criticized over such things as its ban on beef after a case of mad cow disease was discovered in 2003 in cows from the United States (and a few subsequent cases), while Canadians object to the high American agricultural subsidies. Concerns in Canada also run high over aspects of the North American Free Trade Agreement (NAFTA) such as Chapter 11. 
A principal instrument of this cooperation is the International Joint Commission (IJC), established as part of the Boundary Waters Treaty of 1909 to resolve differences and promote international cooperation on boundary waters. The Great Lakes Water Quality Agreement of 1972 is another historic example of joint cooperation in controlling trans-border water pollution. However, there have been some disputes. Most recently, the Devil's Lake Outlet, a project instituted by North Dakota, has angered Manitobans who fear that their water may soon become polluted as a result of this project. Beginning in 1986, the Canadian government of Brian Mulroney began pressing the Reagan administration for an "Acid Rain Treaty" to address U.S. industrial air pollution that was causing acid rain in Canada. The Reagan administration was hesitant, and questioned the science behind Mulroney's claims. However, Mulroney was able to prevail. The product was the signing and ratification of the Air Quality Agreement of 1991 by the first Bush administration. Under that treaty, the two governments consult semi-annually on trans-border air pollution, which has demonstrably reduced acid rain, and they have since signed an annex to the treaty dealing with ground level ozone in 2000. Despite this, trans-border air pollution remains an issue, particularly in the Great Lakes-St. Lawrence watershed during the summer. The main source of this trans-border pollution is coal-fired power stations, most of them located in the Midwestern United States. As part of the negotiations to create NAFTA, Canada and the U.S. signed, along with Mexico, the North American Agreement On Environmental Cooperation, which created the Commission for Environmental Cooperation, which monitors environmental issues across the continent, publishing the North American Environmental Atlas as one aspect of its monitoring duties.
Currently, neither country's government supports the Kyoto Protocol, which set out a schedule for curbing greenhouse gas emissions. Unlike the United States, Canada has ratified the agreement. Yet after ratification, due to internal political conflict within Canada, the Canadian government has not enforced the Kyoto Protocol, and has received criticism from environmental groups and from other governments for its climate change positions. In January 2011, the Canadian minister of the environment, Peter Kent, explicitly stated that the policy of his government with regard to greenhouse gas emissions reductions is to wait for the United States to act first, and then try to harmonize with that action – a position that has been condemned by environmentalists, Canadian nationalists, scientists, and government think-tanks. The United States and Britain had a long-standing dispute about the rights of Americans fishing in the waters near Newfoundland. Before 1776, there was no question that American fishermen, mostly from Massachusetts, had rights to use the waters off Newfoundland. In the peace treaty negotiations of 1783, the Americans insisted on a statement of these rights. However, France, an American ally, disputed the American position because France had its own specified rights in the area and wanted them to be exclusive. The Treaty of Paris (1783) gave the Americans not rights, but rather "liberties" to fish within the territorial waters of British North America and to dry fish on certain coasts. After the War of 1812, the Convention of 1818 between the United States and Britain specified exactly what liberties were involved. Canadian and Newfoundland fishermen contested these liberties in the 1830s and 1840s. The Canadian–American Reciprocity Treaty of 1854 and the Treaty of Washington of 1871 spelled out the liberties in more detail.
However, the Treaty of Washington expired in 1885, and there was a continuous round of disputes over jurisdictions and liberties. Britain and the United States sent the issue to the Permanent Court of Arbitration in The Hague in 1909. It produced a compromise settlement that permanently ended the problems. In 2003, the American government became concerned when members of the Canadian government announced plans to decriminalize marijuana. David Murray, an assistant to U.S. Drug Czar John P. Walters, said in a CBC interview that, "We would have to respond. We would be forced to respond." However, the election of the Conservative Party in early 2006 halted the liberalization of marijuana laws for the foreseeable future. A 2007 joint report by American and Canadian officials on cross-border drug smuggling indicated that, despite their best efforts, "drug trafficking still occurs in significant quantities in both directions across the border. The principal illicit substances smuggled across our shared border are MDMA ("Ecstasy"), cocaine, and marijuana." The report indicated that Canada was a major producer of "Ecstasy" and marijuana for the U.S. market, while the U.S. was a transit country for cocaine entering Canada. Presidents and prime ministers typically make formal or informal statements that indicate the diplomatic policy of their administration. Diplomats and journalists at the time—and historians since—dissect the nuances and tone to detect the warmth or coolness of the relationship. United States President George W. Bush was "deeply disliked" by a majority of Canadians, according to the "Arizona Daily Sun". A 2004 poll found that more than two thirds of Canadians favoured Democrat John Kerry over Bush in the 2004 presidential election, with Bush's lowest approval ratings in Canada being in the province of Quebec, where just 11% of the population supported him.
Canadian public opinion of Barack Obama was significantly more positive. A 2012 poll found that 65% of Canadians would vote for Obama in the 2012 presidential election "if they could" while only 9% of Canadians would vote for his Republican opponent Mitt Romney. The same study found that 61% of Canadians felt that the Obama administration had been "good" for America, while only 12% felt it had been "bad". Similarly, a Pew Research poll conducted in June 2016 found that 83% of Canadians were "confident in Obama to do the right thing regarding world affairs". The study also found that a majority of members of all three major Canadian political parties supported Obama, and also found that Obama had slightly higher approval ratings in Canada in 2012 than he did in 2008. John Ibbitson of "The Globe and Mail" stated in 2012 that Canadians generally supported Democratic presidents over Republican presidents, citing how President Richard Nixon was "never liked" in Canada and that Canadians generally did not approve of Prime Minister Brian Mulroney's friendship with President Ronald Reagan. A November 2016 poll found 82% of Canadians preferred Hillary Clinton over Donald Trump. A January 2017 poll found that 66% of Canadians "disapproved" of Donald Trump, with 23% approving of him and 11% being "unsure". The poll also found that only 18% of Canadians believed Trump's presidency would have a positive impact on Canada, while 63% believed it would have a negative effect. A July 2019 poll found 79% of Canadians preferred Joe Biden or Bernie Sanders over Trump. Ongoing disputes between the two countries include maritime boundary disputes, territorial land disputes, and disputes over international status. A long-simmering dispute between Canada and the U.S. involves the issue of Canadian sovereignty over the Northwest Passage (the sea passages in the Arctic).
Canada's assertion that the Northwest Passage represents internal (territorial) waters has been challenged by other countries, especially the U.S., which argue that these waters constitute an international strait (international waters). Canadians were alarmed when an American reinforced oil tanker transited the Northwest Passage in 1969, followed by the icebreaker Polar Sea in 1985, which resulted in a minor diplomatic incident. In 1970, the Canadian parliament enacted the Arctic Waters Pollution Prevention Act, which asserts Canadian regulatory control over pollution within a 100-mile zone. In response, the United States in 1970 stated, "We cannot accept the assertion of a Canadian claim that the Arctic waters are internal waters of Canada. ... Such acceptance would jeopardize the freedom of navigation essential for United States naval activities worldwide." A compromise of sorts was reached in 1988 by an agreement on "Arctic Cooperation," which pledges that voyages of American icebreakers "will be undertaken with the consent of the Government of Canada." However, the agreement did not alter either country's basic legal position. Paul Cellucci, the American ambassador to Canada, suggested to Washington in 2005 that it should recognize the straits as belonging to Canada. His advice was rejected, and Prime Minister Stephen Harper took the opposite position. The U.S. opposes Harper's proposed plan to deploy military icebreakers in the Arctic to detect interlopers and assert Canadian sovereignty over those waters. Canada and the United States both hold membership in a number of multinational organizations. Canada's chief diplomatic mission to the United States is the Canadian Embassy in Washington, D.C. It is further supported by many consulates located throughout the United States. The Canadian Government maintains consulates-general in several major U.S.
cities including Atlanta, Boston, Chicago, Dallas, Denver, Detroit, Los Angeles, Miami, Minneapolis, New York City, San Francisco and Seattle. Canadian consular services are also available in Honolulu at the consulate of Australia through the Canada–Australia Consular Services Sharing Agreement. There are also Canadian trade offices located in Houston, Palo Alto and San Diego. The United States's chief diplomatic mission to Canada is the United States Embassy in Ottawa. It is further supported by many consulates located throughout Canada. The U.S. government maintains consulates-general in several major Canadian cities including Calgary, Halifax, Montreal, Quebec City, Toronto, Vancouver and Winnipeg. The United States also maintains Virtual Presence Posts (VPP) in the Northwest Territories, Nunavut, Southwestern Ontario and Yukon.
https://en.wikipedia.org/wiki?curid=5199
Christianity Christianity is an Abrahamic monotheistic religion based on the life and teachings of Jesus of Nazareth. Its adherents, known as Christians, believe that Jesus is the Christ, whose coming as the messiah was prophesied in the Hebrew Bible, called the Old Testament in Christianity, and chronicled in the New Testament. It is the world's largest religion, with about 2.4 billion followers. Christianity remains culturally diverse in its Western and Eastern branches, as well as in its doctrines concerning justification and the nature of salvation, ecclesiology, ordination, and Christology. Their creeds generally hold in common Jesus as the Son of God—the logos incarnated—who ministered, suffered, and died on a cross, but rose from the dead for the salvation of mankind; this message is referred to as the gospel, meaning the "good news", in the Bible. Jesus' life and teachings are described in the four canonical gospels of Matthew, Mark, Luke and John, with the Jewish Old Testament as the gospels' respected background. Christianity began as a Second Temple Judaic sect in the 1st century in the Roman province of Judea. Jesus' apostles and their followers spread around the Levant, Europe, Anatolia, Mesopotamia, Transcaucasia, Egypt, and Ethiopia, despite initial persecution. It soon attracted gentile God-fearers, which led to a departure from Jewish customs, and, after the Fall of Jerusalem in AD 70, which ended Temple-based Judaism, Christianity slowly separated from Judaism. Emperor Constantine the Great decriminalized Christianity in the Roman Empire by the Edict of Milan (313), later convening the Council of Nicaea (325), where Early Christianity was consolidated into what would become the State church of the Roman Empire (380). The early history of Christianity's united church before major schisms is sometimes referred to as the "Great Church".
The Church of the East split after the Council of Ephesus (431) and Oriental Orthodoxy split after the Council of Chalcedon (451) over differences in Christology, while the Eastern Orthodox Church and the Catholic Church separated in the East–West Schism (1054), especially over the authority of the bishop of Rome. Protestantism split into numerous denominations from the Latin Catholic Church in the Reformation era (16th century) over theological and ecclesiological disputes, most predominantly on the issues of justification and papal primacy. Christianity played a prominent role in the development of Western civilization, particularly in Europe from late antiquity and the Middle Ages. Following the Age of Discovery (15th–17th century), Christianity was spread into the Americas, Oceania, sub-Saharan Africa, and the rest of the world via missionary work. The four largest branches of Christianity are the Catholic Church (1.3 billion/50.1%), Protestantism (920 million/36.7%), the Eastern Orthodox Church (230 million) and Oriental Orthodoxy (62 million; Orthodoxy combined at 11.9%), amid various efforts toward unity (ecumenism). Despite a decline in adherence in the West, Christianity remains the dominant religion in the region, with about 70% of the population identifying as Christian. Christianity is growing in Africa and Asia, the world's most populous continents. Christians remain persecuted in some regions of the world, especially in the Middle East, North Africa, East Asia, and South Asia. Early Jewish Christians referred to themselves as 'The Way', probably coming from the phrase "prepare the way of the Lord." The term "Christian" was first used in reference to Jesus's disciples in the city of Antioch, meaning "followers of Christ," by the non-Jewish inhabitants of Antioch. The earliest recorded use of the term "Christianity" was by Ignatius of Antioch, in around 100 AD.
While Christians worldwide share basic convictions, there are also differences of interpretations and opinions of the Bible and sacred traditions on which Christianity is based. Concise doctrinal statements or confessions of religious beliefs are known as creeds. They began as baptismal formulae and were later expanded during the Christological controversies of the 4th and 5th centuries to become statements of faith. The Apostles' Creed is the most widely accepted statement of the articles of Christian faith. It is used by a number of Christian denominations for both liturgical and catechetical purposes, most visibly by liturgical churches of Western Christian tradition, including the Latin Church of the Catholic Church, Lutheranism, Anglicanism, and Western Rite Orthodoxy. It is also used by Presbyterians, Methodists, and Congregationalists. This particular creed was developed between the 2nd and 9th centuries. Its central doctrines are those of the Trinity and God the Creator. Each of the doctrines found in this creed can be traced to statements current in the apostolic period. The creed was apparently used as a summary of Christian doctrine for baptismal candidates in the churches of Rome. Its points include: The Nicene Creed was formulated, largely in response to Arianism, at the Councils of Nicaea and Constantinople in 325 and 381 respectively, and ratified as the universal creed of Christendom by the First Council of Ephesus in 431. The Chalcedonian Definition, or Creed of Chalcedon, developed at the Council of Chalcedon in 451, though rejected by the Oriental Orthodox, taught Christ "to be acknowledged in two natures, inconfusedly, unchangeably, indivisibly, inseparably": one divine and one human, and that both natures, while perfect in themselves, are nevertheless also perfectly united into one person. 
The Athanasian Creed, received in the Western Church as having the same status as the Nicene and Chalcedonian, says: "We worship one God in Trinity, and Trinity in Unity; neither confounding the Persons nor dividing the Substance." Most Christians (Catholic, Eastern Orthodox, Oriental Orthodox, and Protestant alike) accept the use of creeds, and subscribe to at least one of the creeds mentioned above. Many Evangelical Protestants reject creeds as definitive statements of faith, even while agreeing with some or all of the substance of the creeds. Most Baptists do not use creeds "in that they have not sought to establish binding authoritative confessions of faith on one another." Also rejecting creeds are groups with roots in the Restoration Movement, such as the Christian Church (Disciples of Christ), the Evangelical Christian Church in Canada, and the Churches of Christ. The central tenet of Christianity is the belief in Jesus as the Son of God and the Messiah (Christ). Christians believe that Jesus, as the Messiah, was anointed by God as savior of humanity and hold that Jesus' coming was the fulfillment of messianic prophecies of the Old Testament. The Christian concept of messiah differs significantly from the contemporary Jewish concept. The core Christian belief is that through belief in and acceptance of the death and resurrection of Jesus, sinful humans can be reconciled to God, and thereby are offered salvation and the promise of eternal life. While there have been many theological disputes over the nature of Jesus over the earliest centuries of Christian history, generally, Christians believe that Jesus is God incarnate and "true God and true man" (or both fully divine and fully human). Jesus, having become fully human, suffered the pains and temptations of a mortal man, but did not sin. As fully God, he rose to life again. 
According to the New Testament, he rose from the dead, ascended to heaven, is seated at the right hand of the Father, and will ultimately return to fulfill the rest of the Messianic prophecy, including the resurrection of the dead, the Last Judgment, and the final establishment of the Kingdom of God. According to the canonical gospels of Matthew and Luke, Jesus was conceived by the Holy Spirit and born from the Virgin Mary. Little of Jesus' childhood is recorded in the canonical gospels, although infancy gospels were popular in antiquity. In comparison, his adulthood, especially the week before his death, is well documented in the gospels contained within the New Testament, because that part of his life is believed to be most important. The biblical accounts of Jesus' ministry include: his baptism, miracles, preaching, teaching, and deeds. Christians consider the resurrection of Jesus to be the cornerstone of their faith (see 1 Corinthians 15) and the most important event in history. Among Christian beliefs, the death and resurrection of Jesus are two core events on which much of Christian doctrine and theology is based. According to the New Testament, Jesus was crucified, died a physical death, was buried within a tomb, and rose from the dead three days later. The New Testament mentions several post-resurrection appearances of Jesus on different occasions to his twelve apostles and disciples, including "more than five hundred brethren at once", before Jesus' ascension to heaven. Jesus' death and resurrection are commemorated by Christians in all worship services, with special emphasis during Holy Week, which includes Good Friday and Easter Sunday. The death and resurrection of Jesus are usually considered the most important events in Christian theology, partly because they demonstrate that Jesus has power over life and death and therefore has the authority and power to give people eternal life. 
Christian churches accept and teach the New Testament account of the resurrection of Jesus with very few exceptions. Some modern scholars use the belief of Jesus' followers in the resurrection as a point of departure for establishing the continuity of the historical Jesus and the proclamation of the early church. Some liberal Christians do not accept a literal bodily resurrection, seeing the story as a richly symbolic and spiritually nourishing myth. Arguments over death and resurrection claims occur at many religious debates and interfaith dialogues. Paul the Apostle, an early Christian convert and missionary, wrote, "If Christ was not raised, then all our preaching is useless, and your trust in God is useless." Paul the Apostle, like Jews and Roman pagans of his time, believed that sacrifice could bring about new kinship ties, purity, and eternal life. For Paul, the necessary sacrifice was the death of Jesus: Gentiles who are "Christ's" are, like Israel, descendants of Abraham and "heirs according to the promise". The God who raised Jesus from the dead would also give new life to the "mortal bodies" of Gentile Christians, who had become, with Israel, the "children of God", and were therefore no longer "in the flesh". Modern Christian churches tend to be much more concerned with how humanity can be saved from a universal condition of sin and death than with the question of how both Jews and Gentiles can be in God's family. According to Eastern Orthodox theology, based upon their understanding of the atonement as put forward by Irenaeus' recapitulation theory, Jesus' death is a ransom. This restores the relation with God, who is loving and reaches out to humanity, and offers the possibility of "theosis", i.e., divinization, becoming the kind of humans God wants humanity to be. According to Catholic doctrine, Jesus' death satisfies the wrath of God, aroused by the offense to God's honor caused by human sinfulness.
The Catholic Church teaches that salvation does not occur without faithfulness on the part of Christians; converts must live in accordance with principles of love and ordinarily must be baptized. In Protestant theology, Jesus' death is regarded as a substitutionary penalty carried by Jesus, for the debt that has to be paid by humankind when it broke God's moral law. Martin Luther taught that baptism was necessary for salvation, but modern Lutherans and other Protestants tend to teach that salvation is a gift that comes to an individual by God's grace, sometimes defined as "unmerited favor", even apart from baptism. Christians differ in their views on the extent to which individuals' salvation is pre-ordained by God. Reformed theology places distinctive emphasis on grace by teaching that individuals are completely incapable of self-redemption, but that sanctifying grace is irresistible. In contrast, Catholics, Orthodox Christians, and Arminian Protestants believe that the exercise of free will is necessary to have faith in Jesus. "Trinity" refers to the teaching that the one God comprises three distinct, eternally co-existing persons: the "Father", the "Son" (incarnate in Jesus Christ), and the "Holy Spirit". Together, these three persons are sometimes called the Godhead, although there is no single term in use in Scripture to denote the unified Godhead. In the words of the Athanasian Creed, an early statement of Christian belief, "the Father is God, the Son is God, and the Holy Spirit is God, and yet there are not three Gods but one God". They are distinct from one another: the Father has no source, the Son is begotten of the Father, and the Spirit proceeds from the Father. Though distinct, the three persons cannot be divided from one another in being or in operation.
While some Christians also believe that God appeared as the Father in the Old Testament, it is agreed that he appeared as the Son in the New Testament and continues to manifest as the Holy Spirit in the present. Still, God existed as three persons in each of these times. However, traditionally there is a belief that it was the Son who appeared in the Old Testament because, for example, when the Trinity is depicted in art, the Son typically has a distinctive appearance, a cruciform halo identifying Christ, and in depictions of the Garden of Eden, this looks forward to an Incarnation yet to occur. In some Early Christian sarcophagi the Logos is distinguished with a beard, "which allows him to appear ancient, even pre-existent." The Trinity is an essential doctrine of mainstream Christianity. From earlier than the times of the Nicene Creed (325), Christianity advocated the triune mystery-nature of God as a normative profession of faith. According to Roger E. Olson and Christopher Hall, through prayer, meditation, study and practice, the Christian community concluded "that God must exist as both a unity and trinity", codifying this in ecumenical council at the end of the 4th century. According to this doctrine, God is not divided in the sense that each person has a third of the whole; rather, each person is considered to be fully God (see Perichoresis). The distinction lies in their relations, the Father being unbegotten; the Son being begotten of the Father; and the Holy Spirit proceeding from the Father and (in Western Christian theology) from the Son. Regardless of this apparent difference, the three "persons" are each eternal and omnipotent. Other Christian religions, including Unitarian Universalism, Jehovah's Witnesses, and Mormonism, do not share those views on the Trinity. The Greek word "trias" is first seen in this sense in the works of Theophilus of Antioch; his text reads: "of the Trinity, of God, and of His Word, and of His Wisdom".
The term may have been in use before this time; its Latin equivalent, "trinitas", appears afterwards with an explicit reference to the Father, the Son, and the Holy Spirit, in Tertullian. In the following century, the word was in general use. It is found in many passages of Origen. "Trinitarianism" denotes Christians who believe in the concept of the Trinity. Almost all Christian denominations and churches hold Trinitarian beliefs. Although the words "Trinity" and "Triune" do not appear in the Bible, beginning in the 3rd century theologians developed the term and concept to facilitate comprehension of the New Testament teachings of God as being Father, Son, and Holy Spirit. Since that time, Christian theologians have been careful to emphasize that Trinity does not imply that there are three gods (the antitrinitarian heresy of Tritheism), nor that each hypostasis of the Trinity is one-third of an infinite God (partialism), nor that the Son and the Holy Spirit are beings created by and subordinate to the Father (Arianism). Rather, the Trinity is defined as one God in three persons. "Nontrinitarianism" (or "antitrinitarianism") refers to theology that rejects the doctrine of the Trinity. Various nontrinitarian views, such as adoptionism or modalism, existed in early Christianity, leading to the disputes about Christology. Nontrinitarianism reappeared in the Gnosticism of the Cathars between the 11th and 13th centuries, among groups with Unitarian theology in the Protestant Reformation of the 16th century, in the 18th-century Enlightenment, and in some groups arising during the Second Great Awakening of the 19th century. The end of things, whether the end of an individual life, the end of the age, or the end of the world, broadly speaking, is Christian eschatology: the study of the destiny of humans as it is revealed in the Bible.
The major issues in Christian eschatology are the Tribulation, death and the afterlife, (mainly for Evangelical groups) the Millennium and the following Rapture, the Second Coming of Jesus, Resurrection of the Dead, Heaven, (for liturgical branches) Purgatory, and Hell, the Last Judgment, the end of the world, and the New Heavens and New Earth. Christians believe that the second coming of Christ will occur at the end of time, after a period of severe persecution (the Great Tribulation). All who have died will be resurrected bodily from the dead for the Last Judgment. Jesus will fully establish the Kingdom of God in fulfillment of scriptural prophecies. Most Christians believe that human beings experience divine judgment and are rewarded either with eternal life or eternal damnation. This includes the general judgement at the resurrection of the dead as well as the belief (held by Catholics, Orthodox and most Protestants) in a judgment particular to the individual soul upon physical death. In the liturgical branches (e.g. Catholicism or Eastern or Oriental Orthodoxy), those who die in a state of grace, i.e., without any mortal sin separating them from God, but are still imperfectly purified from the effects of sin, undergo purification through the intermediate state of purgatory to achieve the holiness necessary for entrance into God's presence. Those who have attained this goal are called "saints" (Latin "sanctus", "holy"). Some Christian groups, such as Seventh-day Adventists, hold to mortalism, the belief that the human soul is not naturally immortal, and is unconscious during the intermediate state between bodily death and resurrection. These Christians also hold to Annihilationism, the belief that subsequent to the final judgement, the wicked will cease to exist rather than suffer everlasting torment. Jehovah's Witnesses hold to a similar view. 
Depending on the specific denomination of Christianity, practices may include baptism, the Eucharist (Holy Communion or the Lord's Supper), prayer (including the Lord's Prayer), confession, confirmation, burial rites, marriage rites and the religious education of children. Most denominations have ordained clergy who lead regular communal worship services. Services of worship typically follow a pattern or form known as liturgy. Justin Martyr described 2nd-century Christian liturgy in his "First Apology" to Emperor Antoninus Pius, and his description remains relevant to the basic structure of Christian liturgical worship. Thus, as Justin described, Christians assemble for communal worship typically on Sunday, the day of the resurrection, though other liturgical practices often occur outside this setting. Scripture readings are drawn from the Old and New Testaments, but especially the gospels. Instruction is given based on these readings, called a sermon or homily. There are a variety of congregational prayers, including thanksgiving, confession, and intercession, which occur throughout the service and take a variety of forms including recited, responsive, silent, or sung. Psalms, hymns, or worship songs may be sung. Services can be varied for special events like significant feast days. Nearly all forms of worship incorporate the Eucharist, which consists of a meal. It is reenacted in accordance with Jesus' instruction at the Last Supper that his followers do in remembrance of him as when he gave his disciples bread, saying, "This is my body", and gave them wine, saying, "This is my blood". In the early church, Christians and those yet to complete initiation would separate for the Eucharistic part of the service. Some denominations continue to practice 'closed communion'. They offer communion to those who are already united in that denomination or sometimes individual church. Catholics restrict participation to their members who are not in a state of mortal sin.
Many other churches practice 'open communion', since they view communion as a means to unity rather than an end, and invite all believing Christians to participate. In Christian belief and practice, a "sacrament" is a rite, instituted by Christ, that confers grace, constituting a sacred mystery. The term is derived from the Latin word "sacramentum", which was used to translate the Greek word for "mystery". Views concerning both which rites are sacramental, and what it means for an act to be a sacrament, vary among Christian denominations and traditions. The most conventional functional definition of a sacrament is that it is an outward sign, instituted by Christ, that conveys an inward, spiritual grace through Christ. The two most widely accepted sacraments are Baptism and the Eucharist; however, the majority of Christians also recognize five additional sacraments: Confirmation (Chrismation in the Orthodox tradition), Holy Orders (or ordination), Penance (or Confession), Anointing of the Sick, and Matrimony (see Christian views on marriage). Taken together, these are the Seven Sacraments as recognized by churches in the High Church tradition—notably Catholic, Eastern Orthodox, Oriental Orthodox, Independent Catholic, Old Catholic, many Anglicans, and some Lutherans. Most other denominations and traditions typically affirm only Baptism and Eucharist as sacraments, while some Protestant groups, such as the Quakers, reject sacramental theology. Christian denominations, such as Baptists, which believe these rites do not communicate grace, prefer to call Baptism and Holy Communion "ordinances" rather than sacraments. In addition, the Church of the East has two sacraments in place of the traditional sacraments of Matrimony and the Anointing of the Sick: Holy Leaven (Melka) and the sign of the cross. Catholics, Eastern Christians, Lutherans, Anglicans and other traditional Protestant communities frame worship around the liturgical year.
The liturgical cycle divides the year into a series of seasons, each with its own theological emphases and modes of prayer, which can be signified by different ways of decorating churches, colors of paraments and vestments for clergy, scriptural readings, themes for preaching and even different traditions and practices often observed personally or in the home. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and Eastern Christians use analogous calendars based on the cycle of their respective rites. Calendars set aside holy days, such as solemnities, which commemorate an event in the life of Jesus, Mary, or the saints; periods of fasting, such as Lent; and other pious events, such as memoria, or lesser festivals commemorating saints. Christian groups that do not follow a liturgical tradition often retain certain celebrations, such as Christmas, Easter, and Pentecost: these are the celebrations of Christ's birth, resurrection, and the descent of the Holy Spirit upon the Church, respectively. A few denominations make no use of a liturgical calendar. Christianity has not generally practiced aniconism, the avoidance or prohibition of devotional images, even if early Jewish Christians and some modern denominations, invoking the Decalogue's prohibition of idolatry, avoided figures in their symbols. The cross, today one of the most widely recognized symbols, was used by Christians from the earliest times. Tertullian, in his book "De Corona", tells how it was already a tradition for Christians to trace the sign of the cross on their foreheads. Although the cross was known to the early Christians, the crucifix did not appear in use until the 5th century. Among the earliest Christian symbols, that of the fish or Ichthys seems to have ranked first in importance, as seen on monumental sources such as tombs from the first decades of the 2nd century.
Its popularity seemingly arose from the Greek word "ichthys" (fish) forming an acronym for the Greek phrase "Iesous Christos Theou Yios Soter" (Ἰησοῦς Χριστός, Θεοῦ Υἱός, Σωτήρ), (Jesus Christ, Son of God, Savior), a concise summary of Christian faith. Other major Christian symbols include the chi-rho monogram, the dove (symbolic of the Holy Spirit), the sacrificial lamb (representing Christ's sacrifice), the vine (symbolizing the connection of the Christian with Christ) and many others. These all derive from passages of the New Testament. Baptism is the ritual act, with the use of water, by which a person is admitted to membership of the Church. Beliefs on baptism vary among denominations. Differences occur firstly on whether the act has any spiritual significance. Some, such as the Catholic and Eastern Orthodox churches, as well as Lutherans and Anglicans, hold to the doctrine of baptismal regeneration, which affirms that baptism creates or strengthens a person's faith, and is intimately linked to salvation. Others view baptism as a purely symbolic act, an external public declaration of the inward change which has taken place in the person, but not as spiritually efficacious. Secondly, there are differences of opinion on the methodology of the act. These methods are: by "immersion"; if immersion is total, by "submersion"; by affusion (pouring); and by aspersion (sprinkling). Those who hold the first view may also adhere to the tradition of infant baptism; the Orthodox Churches all practice infant baptism and always baptize by total immersion repeated three times in the name of the Father, the Son, and the Holy Spirit. The Catholic Church also practices infant baptism, usually by affusion, and utilizing the Trinitarian formula. Jesus' teaching on prayer in the Sermon on the Mount displays a distinct lack of interest in the external aspects of prayer. 
A concern with the techniques of prayer is condemned as "pagan", and instead a simple trust in God's fatherly goodness is encouraged. Elsewhere in the New Testament, this same freedom of access to God is also emphasized. This confident position should be understood in light of Christian belief in the unique relationship between the believer and Christ through the indwelling of the Holy Spirit. In subsequent Christian traditions, certain physical gestures are emphasized, including medieval gestures such as genuflection or making the sign of the cross. Kneeling, bowing, and prostrations (see also poklon) are often practiced in more traditional branches of Christianity. Frequently in Western Christianity, the hands are placed palms together and forward as in the feudal commendation ceremony. At other times the older orans posture may be used, with palms up and elbows in. "Intercessory prayer" is prayer offered for the benefit of other people. There are many intercessory prayers recorded in the Bible, including prayers of the Apostle Peter on behalf of sick persons and by prophets of the Old Testament in favor of other people. In the Epistle of James, no distinction is made between the intercessory prayer offered by ordinary believers and the prominent Old Testament prophet Elijah. The effectiveness of prayer in Christianity derives from the power of God rather than the status of the one praying. The ancient church, in both Eastern and Western Christianity, developed a tradition of asking for the intercession of (deceased) saints, and this remains the practice of most Eastern Orthodox, Oriental Orthodox, Catholic, and some Anglican churches. Churches of the Protestant Reformation, however, rejected prayer to the saints, largely on the basis of the sole mediatorship of Christ. The reformer Huldrych Zwingli admitted that he had offered prayers to the saints until his reading of the Bible convinced him that this was idolatrous.
According to the "Catechism of the Catholic Church": "Prayer is the raising of one's mind and heart to God or the requesting of good things from God." The "Book of Common Prayer" in the Anglican tradition is a guide which provides a set order for services, containing set prayers, scripture readings, and hymns or sung Psalms. Christianity, like other religions, has adherents whose beliefs and biblical interpretations vary. Christianity regards the biblical canon, the Old Testament and the New Testament, as the inspired word of God. The traditional view of inspiration is that God worked through human authors so that what they produced was what God wished to communicate. The Greek word for inspiration is "theopneustos", which literally means "God-breathed". Some believe that divine inspiration makes present-day Bibles inerrant. Others claim inerrancy for the Bible in its original manuscripts, although none of those are extant. Still others maintain that only a particular translation is inerrant, such as the King James Version. Another closely related view is biblical infallibility or limited inerrancy, which affirms that the Bible is free of error as a guide to salvation, but may include errors on matters such as history, geography, or science. The books of the Bible accepted by the Orthodox, Catholic, and Protestant churches vary somewhat, with Jews accepting only the Hebrew Bible as canonical; however, there is substantial overlap. These variations are a reflection of the range of traditions, and of the councils that have convened on the subject. Every version of the Old Testament always includes the books of the Tanakh, the canon of the Hebrew Bible. The Catholic and Orthodox canons, in addition to the Tanakh, also include the deuterocanonical books as part of the Old Testament. These books appear in the Septuagint, but are regarded by Protestants to be apocryphal.
However, they are considered to be important historical documents which help to inform the understanding of words, grammar, and syntax used in the historical period of their conception. Some versions of the Bible include a separate Apocrypha section between the Old Testament and the New Testament. The New Testament, originally written in Koine Greek, contains 27 books which are agreed upon by all churches. Modern scholarship has raised many issues with the Bible. While the King James Version is held to by many because of its striking English prose, in fact it was translated from the Erasmus Greek Bible, which in turn "was based on a single 12th Century manuscript that is one of the worst manuscripts we have available to us". Much scholarship in the past several hundred years has gone into comparing different manuscripts in order to reconstruct the original text. Another issue is that several books are considered to be forgeries. The injunction that women "be silent and submissive" in 1 Timothy 2 is thought by many to be a forgery by a follower of Paul; a similar phrase in 1 Corinthians 14, which is thought to be by Paul, appears in different places in different manuscripts and is thought to have originally been a margin note by a copyist. Other verses in 1 Corinthians, such as 1 Corinthians 11:2–16 where women are instructed to wear a covering over their hair "when they pray or prophesy", contradict this verse. A final issue with the Bible is the way in which books were selected for inclusion in the New Testament. Other gospels have now been recovered, such as those found near Nag Hammadi in 1945, and while some of these texts are quite different from what Christians have been used to, it should be understood that some of this newly recovered Gospel material is quite possibly contemporaneous with, or even earlier than, the New Testament Gospels.
The core of the Gospel of Thomas, in particular, may date from as early as AD 50 (although some major scholars contest this early dating), and if so would provide an insight into the earliest gospel texts that underlie the canonical Gospels, texts that are mentioned in Luke 1:1–2. The Gospel of Thomas contains much that is familiar from the canonical Gospels—verse 113, for example ("The Father's Kingdom is spread out upon the earth, but people do not see it"), is reminiscent of Luke 17:20–21—and the Gospel of John, with a terminology and approach that is suggestive of what was later termed "Gnosticism", has recently been seen as a possible response to the Gospel of Thomas, a text that is commonly labeled "proto-Gnostic". Scholarship, then, is currently exploring the relationship in the early church between mystical speculation and experience on the one hand and the search for church order on the other, by analyzing new-found texts, by subjecting canonical texts to further scrutiny, and by an examination of the passage of New Testament texts to canonical status. In antiquity, two schools of exegesis developed in Alexandria and Antioch. The Alexandrian interpretation, exemplified by Origen, tended to read Scripture allegorically, while the Antiochene interpretation adhered to the literal sense, holding that other meanings (called "theoria") could only be accepted if based on the literal meaning. Catholic theology distinguishes two senses of scripture: the literal and the spiritual. The "literal" sense of understanding scripture is the meaning conveyed by the words of Scripture. The "spiritual" sense is further subdivided into the allegorical, moral, and anagogical senses. Regarding exegesis, Catholic theology holds to the rules of sound interpretation. Protestant Christians believe that the Bible is a self-sufficient revelation, the final authority on all Christian doctrine, and reveals all truth necessary for salvation. This concept is known as "sola scriptura".
Protestants characteristically believe that ordinary believers may reach an adequate understanding of Scripture because Scripture itself is clear in its meaning (or "perspicuous"). Martin Luther believed that without God's help, Scripture would be "enveloped in darkness". He advocated for "one definite and simple understanding of Scripture". John Calvin wrote, "all who refuse not to follow the Holy Spirit as their guide, find in the Scripture a clear light". Related to this is "efficacy", that Scripture is able to lead people to faith; and "sufficiency", that the Scriptures contain everything that one needs to know in order to obtain salvation and to live a Christian life. Protestants stress the meaning conveyed by the words of Scripture, the historical-grammatical method. The historical-grammatical method or grammatico-historical method is an effort in Biblical hermeneutics to find the intended original meaning in the text. This original intended meaning of the text is drawn out through examination of the passage in light of the grammatical and syntactical aspects, the historical background, the literary genre, as well as theological (canonical) considerations. The historical-grammatical method distinguishes between the one original meaning and the significance of the text. The significance of the text includes the ensuing use of the text or application. The original passage is seen as having only a single meaning or sense. As Milton S. Terry said: "A fundamental principle in grammatico-historical exposition is that the words and sentences can have but one significance in one and the same connection. The moment we neglect this principle we drift out upon a sea of uncertainty and conjecture." Technically speaking, the grammatical-historical method of interpretation is distinct from the determination of the passage's significance in light of that interpretation. Taken together, both define the term (Biblical) hermeneutics. 
Some Protestant interpreters make use of typology. Christianity developed during the 1st century CE as a Jewish Christian sect of Second Temple Judaism. An early Jewish Christian community was founded in Jerusalem under the leadership of the Pillars of the Church, namely James the Just, the brother of the Lord, Peter, and John. Jewish Christianity soon attracted Gentile God-fearers, posing a problem for its Jewish religious outlook, which insisted on close observance of the Jewish commands. Paul the Apostle solved this by insisting that salvation by faith in Christ, and participation in his death and resurrection, sufficed. At first he persecuted the early Christians, but after a conversion experience he preached to the gentiles, and is regarded as having had a formative effect on the emerging Christian identity as separate from Judaism. Eventually, his departure from Jewish customs would result in the establishment of Christianity as an independent religion. This formative period was followed by the early bishops, whom Christians consider the successors of Christ's apostles. From the year 150, Christian teachers began to produce theological and apologetic works aimed at defending the faith. These authors are known as the Church Fathers, and the study of them is called patristics. Notable early Fathers include Ignatius of Antioch, Polycarp, Justin Martyr, Irenaeus, Tertullian, Clement of Alexandria and Origen. According to the New Testament, Christians were, from the beginning, subject to persecution by some Jewish and Roman religious authorities. This involved punishments, including death, for Christians such as Stephen and James, son of Zebedee. Further widespread persecution of the Church occurred under nine subsequent Roman emperors, most intensely under Decius and Diocletian.
Christianity spread to Aramaic-speaking peoples along the Mediterranean coast and also to the inland parts of the Roman Empire and beyond that into the Parthian Empire and the later Sasanian Empire, including Mesopotamia, which was dominated at different times and to varying extents by these empires. The presence of Christianity in Africa began in the middle of the 1st century in Egypt and by the end of the 2nd century in the region around Carthage. Mark the Evangelist is claimed to have started the Church of Alexandria in about 43 CE; various later churches claim this as their own legacy, including the Coptic Orthodox Church of Alexandria. Important Africans who influenced the early development of Christianity include Tertullian, Clement of Alexandria, Origen of Alexandria, Cyprian, Athanasius, and Augustine of Hippo. King Tiridates III made Christianity the state religion in Armenia between 301 and 314, thus Armenia became the first officially Christian state. It was not an entirely new religion in Armenia, having penetrated into the country from at least the third century, but it may have been present even earlier. Constantine I was exposed to Christianity in his youth, and throughout his life his support for the religion grew, culminating in baptism on his deathbed. During his reign, state-sanctioned persecution of Christians was ended with the Edict of Toleration in 311 and the Edict of Milan in 313. At that point, Christianity was still a minority belief, comprising perhaps only five percent of the Roman population. Influenced by his adviser Mardonius, Constantine's nephew Julian unsuccessfully tried to suppress Christianity. On 27 February 380, Theodosius I, Gratian, and Valentinian II established Nicene Christianity as the State church of the Roman Empire. As soon as it became connected to the state, Christianity grew wealthy; the Church solicited donations from the rich and could now own land. 
Constantine was also instrumental in the convocation of the First Council of Nicaea in 325, which sought to address Arianism and formulated the Nicene Creed, which is still used in Catholicism, Eastern Orthodoxy, Lutheranism, Anglicanism, and many other Protestant churches. Nicaea was the first of a series of ecumenical councils, which formally defined critical elements of the theology of the Church, notably concerning Christology. The Church of the East did not accept the third and following ecumenical councils and remains separate today through its successor, the Assyrian Church of the East. In terms of prosperity and cultural life, the Byzantine Empire was one of the peaks in Christian history and Christian civilization, and Constantinople remained the leading city of the Christian world in size, wealth, and culture. There was a renewed interest in classical Greek philosophy, as well as an increase in literary output in vernacular Greek. Byzantine art and literature held a preeminent place in Europe, and the cultural impact of Byzantine art on the West during this period was enormous and of long-lasting significance. The later rise of Islam in North Africa reduced the size and numbers of Christian congregations, leaving in large numbers only the Coptic Church in Egypt, the Ethiopian Orthodox Tewahedo Church in the Horn of Africa and the Nubian Church in the Sudan (Nobatia, Makuria and Alodia). With the decline and fall of the Roman Empire in the West, the papacy became a political player, first visible in Pope Leo's diplomatic dealings with Huns and Vandals. The church also entered into a long period of missionary activity and expansion among the various tribes. While conversion was at times imposed on pain of death (see the Massacre of Verden, for example), what would later become Catholicism also spread among the Hungarians, the Germanic, the Celtic, the Baltic and some Slavic peoples. Around 500, St.
Benedict set out his Monastic Rule, establishing a system of regulations for the foundation and running of monasteries. Monasticism became a powerful force throughout Europe, and gave rise to many early centers of learning, most famously in Ireland, Scotland, and Gaul, contributing to the Carolingian Renaissance of the 9th century. In the 7th century, Muslims conquered Syria (including Jerusalem), North Africa, and Spain, converting some of the Christian population to Islam, and placing the rest under a separate legal status. Part of the Muslims' success was due to the exhaustion of the Byzantine Empire in its decades-long conflict with Persia. Beginning in the 8th century, with the rise of Carolingian leaders, the Papacy sought greater political support in the Frankish Kingdom. The Middle Ages brought about major changes within the church. Pope Gregory the Great dramatically reformed the ecclesiastical structure and administration. In the early 8th century, iconoclasm became a divisive issue, when it was sponsored by the Byzantine emperors. The Second Council of Nicaea (787) finally pronounced in favor of icons. In the early 10th century, Western Christian monasticism was further rejuvenated through the leadership of the great Benedictine monastery of Cluny. In the West, from the 11th century onward, some older cathedral schools became universities (see, for example, University of Oxford, University of Paris and University of Bologna). Previously, higher education had been the domain of Christian cathedral schools or monastic schools ("Scholae monasticae"), led by monks and nuns. Evidence of such schools dates back to the 6th century CE. These new universities expanded the curriculum to include academic programs for clerics, lawyers, civil servants, and physicians. The university is generally regarded as an institution that has its origin in the Medieval Christian setting.
Accompanying the rise of the "new towns" throughout Europe, mendicant orders were founded, bringing the consecrated religious life out of the monastery and into the new urban setting. The two principal mendicant movements were the Franciscans and the Dominicans, founded by St. Francis and St. Dominic, respectively. Both orders made significant contributions to the development of the great universities of Europe. Another new order was the Cistercians, whose large isolated monasteries spearheaded the settlement of former wilderness areas. In this period, church building and ecclesiastical architecture reached new heights, culminating in the styles of Romanesque and Gothic architecture and the building of the great European cathedrals. Christian nationalism emerged during this era, in which Christians felt the impulse to recover lands where Christianity had historically flourished. From 1095 under the pontificate of Urban II, the Crusades were launched. These were a series of military campaigns in the Holy Land and elsewhere, initiated in response to pleas from the Byzantine Emperor Alexios I for aid against Turkish expansion. The Crusades ultimately failed to stifle Islamic aggression and even contributed to Christian enmity with the sacking of Constantinople during the Fourth Crusade. The Christian Church experienced internal conflict between the 7th and 13th centuries that resulted in a schism between the so-called Latin or Western Christian branch (the Catholic Church), and an Eastern, largely Greek, branch (the Eastern Orthodox Church). The two sides disagreed on a number of administrative, liturgical and doctrinal issues, most notably papal primacy of jurisdiction. The Second Council of Lyon (1274) and the Council of Florence (1439) attempted to reunite the churches, but in both cases, the Eastern Orthodox refused to implement the decisions, and the two principal churches remain in schism to the present day.
However, the Catholic Church has achieved union with various smaller eastern churches. In the thirteenth century, a new emphasis on Jesus' suffering, exemplified by the Franciscans' preaching, had the consequence of turning worshippers' attention towards Jews, on whom Christians had placed the blame for Jesus' death. Christianity's limited tolerance of Jews was not new—Augustine of Hippo said that Jews should not be allowed to enjoy the citizenship that Christians took for granted—but the growing antipathy towards Jews was a factor that led to the expulsion of Jews from England in 1290, the first of many such expulsions in Europe. Beginning around 1184, following the crusade against Cathar heresy, various institutions, broadly referred to as the Inquisition, were established with the aim of suppressing heresy and securing religious and doctrinal unity within Christianity through conversion and prosecution. The 15th-century Renaissance brought about a renewed interest in ancient and classical learning. During the Reformation, Martin Luther posted the "Ninety-five Theses" against the sale of indulgences in 1517. Printed copies soon spread throughout Europe. In 1521 the Edict of Worms condemned and excommunicated Luther and his followers, resulting in the schism of Western Christendom into several branches. Other reformers like Zwingli, Oecolampadius, Calvin, Knox, and Arminius further criticized Catholic teaching and worship. These challenges developed into the movement called Protestantism, which repudiated the primacy of the pope, the role of tradition, the seven sacraments, and other doctrines and practices. The Reformation in England began in 1534, when King Henry VIII had himself declared head of the Church of England. Beginning in 1536, the monasteries throughout England, Wales and Ireland were dissolved. Thomas Müntzer, Andreas Karlstadt and other theologians perceived both the Catholic Church and the confessions of the Magisterial Reformation as corrupted.
Their activity brought about the Radical Reformation, which gave birth to various Anabaptist denominations. Partly in response to the Protestant Reformation, the Catholic Church engaged in a substantial process of reform and renewal, known as the Counter-Reformation or Catholic Reform. The Council of Trent clarified and reasserted Catholic doctrine. During the following centuries, competition between Catholicism and Protestantism became deeply entangled with political struggles among European states. Meanwhile, the discovery of America by Christopher Columbus in 1492 brought about a new wave of missionary activity. Partly from missionary zeal, but under the impetus of colonial expansion by the European powers, Christianity spread to the Americas, Oceania, East Asia and sub-Saharan Africa. Throughout Europe, the division caused by the Reformation led to outbreaks of religious violence and the establishment of separate state churches. Lutheranism spread into the northern, central, and eastern parts of present-day Germany, Livonia, and Scandinavia. Anglicanism was established in England in 1534. Calvinism and its varieties, such as Presbyterianism, were introduced in Scotland, the Netherlands, Hungary, Switzerland, and France. Arminianism gained followers in the Netherlands and Frisia. Ultimately, these differences led to the outbreak of conflicts in which religion played a key factor. The Thirty Years' War, the English Civil War, and the French Wars of Religion are prominent examples. These events intensified the Christian debate on persecution and toleration. In the era known as the Great Divergence, when in the West, the Age of Enlightenment and the scientific revolution brought about great societal changes, Christianity was confronted with various forms of skepticism and with certain modern political ideologies, such as versions of socialism and liberalism.
Events ranged from mere anti-clericalism to violent outbursts against Christianity, such as the dechristianization of France during the French Revolution, the Spanish Civil War, and certain Marxist movements, especially the Russian Revolution and the persecution of Christians in the Soviet Union under state atheism. Especially pressing in Europe was the formation of nation states after the Napoleonic era. In all European countries, different Christian denominations found themselves in competition to greater or lesser extents with each other and with the state. Variables were the relative sizes of the denominations and the religious, political, and ideological orientation of the states. Urs Altermatt of the University of Fribourg, looking specifically at Catholicism in Europe, identifies four models for the European nations. In traditionally Catholic-majority countries such as Belgium, Spain, and, to some extent, Austria, religious and national communities are more or less identical. Cultural symbiosis and separation are found in Poland, the Republic of Ireland, and Switzerland, all countries with competing denominations. Competition is found in Germany, the Netherlands, and again Switzerland, all countries with minority Catholic populations, which to a greater or lesser extent identified with the nation. Finally, separation between religion (again, specifically Catholicism) and the state is found to a great degree in France and Italy, countries where the state actively opposed itself to the authority of the Catholic Church. The combined factors of the formation of nation states and ultramontanism, especially in Germany and the Netherlands, but also in England to a much lesser extent, often forced Catholic churches, organizations, and believers to choose between the national demands of the state and the authority of the Church, specifically the papacy.
This conflict came to a head in the First Vatican Council, and in Germany would lead directly to the "Kulturkampf", where liberals and Protestants under the leadership of Bismarck managed to severely restrict Catholic expression and organization. Christian commitment in Europe dropped as modernity and secularism came into their own, particularly in Czechia and Estonia, while religious commitments in America have been generally high in comparison to Europe. The late 20th century has shown the shift of Christian adherence to the Third World and the Southern Hemisphere in general, with the West no longer the chief standard bearer of Christianity. Approximately 7 to 10% of Arabs are Christians, most prevalent in Egypt, Syria and Lebanon. With around 2.4 billion adherents, split into three main branches of Catholic, Protestant, and Eastern Orthodox, Christianity is the world's largest religion. The Christian share of the world's population has stood at around 33% for the last hundred years, which means that one in three persons on Earth is a Christian. This masks a major shift in the demographics of Christianity; large increases in the developing world have been accompanied by substantial declines in the developed world, mainly in Europe and North America. According to a 2015 Pew Research Center study, within the next four decades, Christians will remain the world's largest religion; and by 2050, the Christian population is expected to exceed 3 billion. As a percentage of Christians, the Catholic Church and Orthodoxy (both Eastern and Oriental) are declining in parts of the world (though Catholicism is growing in Asia, in Africa, vibrant in Eastern Europe, etc.), while Protestants and other Christians are on the rise in the developing world. The so-called "popular Protestantism" is one of the fastest growing religious categories in the world.
Nevertheless, Catholicism will also continue to grow to 1.63 billion by 2050, according to Todd Johnson of the Center for the Study of Global Christianity. Africa alone was projected to be home to 230 million African Catholics by 2015. And if, as the U.N. projected in 2018, Africa's population reaches 4.5 billion by 2100 (not the 2 billion predicted in 2004), Catholicism will indeed grow, as will other religious groups. Christianity is the predominant religion in Europe, the Americas, and Southern Africa. In Asia, it is the dominant religion in Georgia, Armenia, East Timor, and the Philippines. However, it is declining in many areas including the Northern and Western United States, Oceania (Australia and New Zealand), northern Europe (including Great Britain, Scandinavia and other places), France, Germany, and the Canadian provinces of Ontario, British Columbia, and Quebec, and parts of Asia (especially the Middle East, due to Christian emigration, South Korea, Taiwan, and Macau). The Christian population is not decreasing in Brazil, the Southern United States, and the province of Alberta, Canada, but the percentage is decreasing. In countries such as Australia and New Zealand, the Christian population is declining in both numbers and percentage. Despite the declining numbers, Christianity remains the dominant religion in the Western World, where 70% are Christians. A 2011 Pew Research Center survey found that 76% of Europeans, 73% in Oceania and about 86% in the Americas (90% in Latin America and 77% in North America) identified themselves as Christians. By 2010 about 157 countries and territories in the world had Christian majorities. However, there are many charismatic movements that have become well established over large parts of the world, especially Africa, Latin America, and Asia. Since 1900, primarily due to conversion, Protestantism has spread rapidly in Africa, Asia, Oceania, and Latin America.
From 1960 to 2000, the number of reported Evangelical Protestants grew at three times the world's population growth rate, and at twice the rate of Islam. A study conducted by St. Mary's University estimated about 10.2 million Muslim converts to Christianity in 2015. The study also found that significant numbers of Muslims have converted to Christianity in Afghanistan, Albania, Azerbaijan, Algeria, Belgium, France, Germany, Iran, India, Indonesia, Malaysia, Morocco, Russia, the Netherlands, Saudi Arabia, Tunisia, Turkey, Kazakhstan, Kyrgyzstan, Kosovo, the United States, and Central Asia. It is also reported that Christianity is popular among people of different backgrounds in India (mostly Hindus), and in Malaysia, Mongolia, Nigeria, Vietnam, Singapore, Indonesia, China, Japan, and South Korea. In most countries in the developed world, church attendance among people who continue to identify themselves as Christians has been falling over the last few decades. Some sources view this simply as part of a drift away from traditional membership institutions, while others link it to signs of a decline in belief in the importance of religion in general. Europe's Christian population, though in decline, still constitutes the largest geographical component of the religion. According to data from the 2012 European Social Survey, around a third of European Christians say they attend services once a month or more, compared with more than two-thirds of Latin American Christians; according to the World Values Survey, about 90% of African Christians (in Ghana, Nigeria, Rwanda, South Africa and Zimbabwe) said they attended church regularly. 
Christianity, in one form or another, is the sole state religion of the following nations: Argentina (Catholic), Tuvalu (Reformed), Tonga (Methodist), Norway (Lutheran), Costa Rica (Catholic), the Kingdom of Denmark (Lutheran), England (Anglican), Georgia (Georgian Orthodox), Greece (Greek Orthodox), Iceland (Lutheran), Liechtenstein (Catholic), Malta (Catholic), Monaco (Catholic), and Vatican City (Catholic). There are numerous other countries, such as Cyprus, which, although they do not have an established church, still give official recognition and support to a specific Christian denomination. The four primary divisions of Christianity are the Catholic Church, the Eastern Orthodox Church, Oriental Orthodoxy, and Protestantism. A broader distinction that is sometimes drawn is between Eastern Christianity and Western Christianity, which has its origins in the East–West Schism (Great Schism) of the 11th century. Recently, a World Christianity that is neither Western nor Eastern has also stood out, for example in African-initiated churches. However, there are other present and historical Christian groups that do not fit neatly into one of these primary categories. There is a diversity of doctrines and liturgical practices among groups calling themselves Christian. These groups may vary ecclesiologically in their views on a classification of Christian denominations.
https://en.wikipedia.org/wiki?curid=5211
Computing Computing is any activity that uses computers to manage, process, and communicate information. It includes development of both hardware and software. Computing is a critical, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM "Computing Curricula 2005" offered a broad definition of "computing" and identified five sub-disciplines of the field, while also recognizing that the meaning of "computing" depends on the context. The term "computing" has sometimes been narrowly defined, as in a 1989 ACM report on "Computing as a Discipline". The term "computing" is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables. Computing is intimately tied to the representation of numbers. But long before abstractions like "the number" arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence (the basis of counting), comparison to a standard (used for measurement), and the "3-4-5" right triangle (a device for assuring a "right angle"). The earliest known tool for use in computation was the abacus, which is thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles. Abaci, of a more modern design, are still used as calculation tools today. This was the first known calculation aid – preceding Greek methods by 2,000 years. 
The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, called the Transistor Computer. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The metal–oxide–silicon field-effect transistor (MOSFET, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The MOSFET made it possible to build high-density integrated circuit chips, leading to what is known as the computer revolution or microcomputer revolution. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. 
They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of "programs, procedures, algorithms" and its "documentation" concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term "hardware" (meaning physical devices). In contrast to hardware, software is intangible. Software is also sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user. The system software serves the application, which in turn serves the user. Application software applies the power of a particular computing platform or system software to a particular purpose. 
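The instruction-execution model described earlier — a program as a sequence of simple instructions whose effects follow from their semantics — can be sketched with a toy interpreter. The three-instruction stack machine below (PUSH, ADD, MUL) is invented purely for illustration and corresponds to no real CPU; Python is used here only as a convenient notation.

```python
# Toy stack machine: a minimal sketch of "the execution process carries
# out the instructions in a computer program". The instruction set is
# hypothetical, chosen only to keep the example small.

def run(program):
    """Execute a list of (opcode, args...) tuples; return the final result."""
    stack = []
    for op, *args in program:
        if op == "PUSH":                      # place a literal on the stack
            stack.append(args[0])
        elif op == "ADD":                     # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":                     # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack[-1]

# Compute (2 + 3) * 4 as a sequence of simple actions on the machine.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))  # 20
```

The same separation appears in real systems: the tuple list plays the role of executable machine instructions, while this Python text is the human-readable source form a programmer studies.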
Some apps such as Microsoft Office are available in versions for several different platforms; others have narrower requirements and are thus called, for example, a Geography application for Windows or an Android application for education or Linux gaming. Sometimes a new and popular application arises that only runs on one platform, increasing the desirability of that platform. This is called a killer application. System software, or systems software, is computer software designed to operate and control the computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software. A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information. Where at least one process in one device is able to send/receive data to/from at least one process residing in a remote device, then the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols include Ethernet, a hardware and Link Layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer, and application-specific data transmission formats. 
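The host-to-host data transfer that the Internet Protocol Suite provides can be made concrete with a minimal sketch: two processes on one machine exchanging bytes over a TCP stream on the loopback interface. This is a toy demonstration, not a production server; Python's standard `socket` and `threading` modules are assumed.

```python
import socket
import threading

# Minimal sketch of two processes "in a network": one process sends data
# to a process on another endpoint over TCP (the Internet Protocol Suite's
# transport layer), here using the loopback interface for self-containment.

def echo_server(sock):
    conn, _ = sock.accept()          # wait for one client connection
    with conn:
        data = conn.recv(1024)       # receive bytes from the peer process
        conn.sendall(data)           # echo them back unchanged

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The "client" side: another process sending and receiving application data.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, network")
    reply = client.recv(1024)
print(reply)  # b'hello, network'
```

Everything below the socket API — framing on the wire, retransmission, routing — is handled by the protocol stack, which is exactly the layering the paragraph above describes.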
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines. The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to serve billions of users. It consists of millions of private, public, academic, business, and government networks, of local to global scope, linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email. Computer programming in general is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive or demanding than natural languages, but easily translated by the computer. The purpose of programming is to invoke the desired behavior (customization) from the machine. The process of writing high-quality source code requires knowledge of both the application's domain "and" the computer science domain. The highest-quality software is thus developed by a team of various domain experts, each person a specialist in some area of development. But the term "programmer" may apply to a range of program quality, from hacker to open source contributor to professional. And a single programmer could do most or all of the computer programming needed to generate the proof of concept to launch a new "killer" application. A programmer, computer programmer, or coder is a person who writes computer software. 
The term "computer programmer" can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with "web". The term "programmer" can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming. The computer industry is made up of all of the businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, the manufacture of computer components and the provision of information technology services including system administration and maintenance. The software industry includes businesses engaged in development, maintenance and publication of software. The industry also includes software services, such as training, documentation, and consulting. Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on the design of hardware within its own domain, but as well the interactions between hardware and the world around it. 
Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. In layman's terms, it is the act of using insights to conceive, model and scale a solution to a problem. The first reference to the term is the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the perceived "software crisis" at the time. "Software development", a much used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of Software Engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard ISO/IEC TR 19759:2015. Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems. Its subfields can be divided into practical techniques for its implementation and application in computer systems and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to description of computations, while the study of computer programming itself investigates various aspects of the use of programming languages and complex systems, and human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans. 
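One of the fundamental properties computational complexity theory studies is how an algorithm's cost grows with input size. As a rough, self-contained illustration (the comparison-counting instrumentation is added purely for demonstration, not part of any standard API), the sketch below contrasts linear search with binary search on a sorted list:

```python
# Toy illustration of algorithmic cost: count the element comparisons made
# by linear search (O(n)) versus binary search (O(log n)) on sorted data.

def linear_search(items, target):
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

def binary_search(items, target):
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1_000_000))
_, lin = linear_search(data, 999_999)   # worst case: scans every element
_, bin_ = binary_search(data, 999_999)  # halves the range each step
print(lin, bin_)  # 1000000 20
```

The gap between a million comparisons and about twenty is the practical face of the abstract growth-rate analysis that complexity theory formalizes.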
"Information systems (IS)" is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data. The ACM's "Computing Careers" website says that the study bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. This field studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, while IS emphasizes functionality over design. Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise. The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several industries are associated with information technology, such as computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce and computer services. A system administrator, IT systems administrator, systems administrator, or sysadmin is a person employed to maintain and operate a computer system or network. The duties of a system administrator are wide-ranging, and may vary substantially from one organization to another. Sysadmins are usually charged with installing, supporting and maintaining servers or other computer systems, and planning for and responding to service outages and other problems. Other duties may include scripting or light programming, project management for systems-related projects, supervising or training computer operators, and being the consultant for computer problems beyond the knowledge of technical support staff. 
DNA-based computing and quantum computing are areas of active research in both hardware and software (such as the development of quantum algorithms). Potential infrastructure for future technologies includes DNA origami on photolithography and quantum antennae for transferring information between ion traps. By 2011, researchers had entangled 14 qubits. Fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology) are becoming more nearly realizable with the discovery of nanoscale superconductors. Fiber-optic and photonic (optical) devices, which already have been used to transport data over long distances, have started being used by data centers, side by side with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects. IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted "CMOS-integrated nanophotonics" (CINP). One benefit of optical interconnects is that motherboards which formerly required a certain kind of system on a chip (SoC) can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs. Another field of research is spintronics. Spintronics can provide computing power and storage, without heat buildup. Some research is being done on hybrid chips, which combine photonics and spintronics. There is also research ongoing on combining plasmonics, photonics, and electronics. Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for much interaction between the owner of these resources and the user using them. 
Typically, this is offered as a service, making it another example of Software as a Service, Platform as a Service, or Infrastructure as a Service, depending on what functionality is offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling. Cloud computing is also being discussed in terms of energy conservation. Allowing thousands of instances of computation to occur on a single machine instead of thousands of individual machines could be a way to save energy. It would also ease the transition to more renewable energy, because you would only need to power one server farm with a set of solar panels or wind turbines instead of millions of people's homes. However, computing being done from a centralized location has its own challenges. One of the major ones is security: cloud computing companies have no obligation to tell you what data they hold on you, where it is kept, or how they are using it. Laws in the modern day are not yet equipped to handle these circumstances. In the future, lawmakers in many countries will have to push to regulate cloud computing and protect the privacy of users. Cloud computing is also a way for individual users or small businesses to benefit from economies of scale. While currently the cloud computing infrastructure is too underdeveloped to benefit the scientific community, within a few years of development it could also be used to help smaller research groups get the computing power they need to answer a lot of the world's questions. Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. The idea of information being a basic part of physics is relatively new, but there seems to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. 
Qubits are capable of being in a superposition, which means that they are in both states, one and zero, simultaneously. This does not mean the qubit holds a value somewhere between 1 and 0; rather, measurement yields 0 or 1 with probabilities determined by the superposition. This trait is called superposition and, together with entanglement — which correlates the states of multiple qubits — it is the core idea of quantum computing and is what allows quantum computers to perform the large-scale calculations they are used for. Quantum computing is often used for scientific research where a normal computer does not have nearly enough computational power to do the calculations necessary. A good example would be molecular modeling. Large molecules are far too complex for modern computers to calculate what happens to them during a reaction, but the power of quantum computers could open the doors to further understanding these molecules.
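Superposition and measurement can be illustrated with a tiny classical simulation. The sketch below — assuming only the Python standard library, and emphatically a toy model rather than a real quantum device — represents a single qubit as two complex amplitudes and shows that a qubit prepared in an equal superposition yields 0 or 1 with roughly equal probability when measured repeatedly:

```python
import math
import random

# Toy single-qubit simulation: the state is a pair of amplitudes
# (alpha, beta) for basis states |0> and |1>, with |alpha|^2 + |beta|^2 = 1.
# This only illustrates superposition and measurement statistics.

def hadamard(state):
    """Map a basis state into an equal superposition (the Hadamard gate)."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def measure(state):
    """Collapse to 0 or 1 with probability given by the squared amplitude."""
    alpha, _ = state
    return 0 if random.random() < abs(alpha) ** 2 else 1

qubit = hadamard((1, 0))                    # |0>  ->  (|0> + |1>) / sqrt(2)
ones = [measure(qubit) for _ in range(10_000)].count(1)
print(ones / 10_000)                        # close to 0.5
```

A real quantum computer gains its power from interference between amplitudes and from entangling many qubits, neither of which this classical sketch can reproduce efficiently at scale — that gap is precisely the point.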
https://en.wikipedia.org/wiki?curid=5213
Casino A casino is a facility for certain types of gambling. Casinos are often built near or combined with hotels, resorts, restaurants, retail shopping, cruise ships, and other tourist attractions. Some casinos are also known for hosting live entertainment, such as stand-up comedy, concerts, and sports. "Casino" is of Italian origin; the root "casa" means a house. The term "casino" may mean a small country villa, summerhouse, or social club. During the 19th century, "casino" came to include other public buildings where pleasurable activities took place; such edifices were usually built on the grounds of a larger Italian villa or palazzo, and were used to host civic town functions, including dancing, gambling, music listening, and sports. Examples in Italy include Villa Farnese and Villa Giulia, and in the US the Newport Casino in Newport, Rhode Island. In modern-day Italian, a "casino" is a brothel (also called "casa chiusa", literally "closed house"), a mess (confusing situation), or a noisy environment; a gaming house is spelt "casinò", with an accent. Not all casinos are used for gaming. The Catalina Casino, on Santa Catalina Island, California, has never been used for traditional games of chance, which were already outlawed in California by the time it was built. The Copenhagen Casino was a Danish theatre which also held public meetings during the 1848 Revolution, which made Denmark a constitutional monarchy. In military and non-military usage, a "casino" (Spanish) or "Kasino" (German) is an officers' mess. The precise origin of gambling is unknown. It is generally believed that gambling in some form or another has been seen in almost every society in history. From the Ancient Greeks and Romans to Napoleon's France and Elizabethan England, much of history is filled with stories of entertainment based on games of chance. 
The first known European gambling house, not called a casino although meeting the modern definition, was the Ridotto, established in Venice, Italy, in 1638 by the Great Council of Venice to provide controlled gambling during the carnival season. It was closed in 1774 as the city government felt it was impoverishing the local gentry. In American history, early gambling establishments were known as saloons. The creation and importance of saloons was greatly influenced by four major cities: New Orleans, St. Louis, Chicago and San Francisco. It was in the saloons that travelers could find people to talk to, drink with, and often gamble with. During the early 20th century in America, gambling was outlawed by state legislation. However, in 1931, gambling was legalized throughout the state of Nevada, where America's first legalized casinos were set up. In 1976 New Jersey allowed gambling in Atlantic City, now America's second largest gambling city. Most jurisdictions worldwide have a minimum gambling age of 18 to 21. Customers gamble by playing games of chance, in some cases with an element of skill, such as craps, roulette, baccarat, blackjack, and video poker. Most games have mathematically determined odds that ensure the house has at all times an advantage over the players. This can be expressed more precisely by the notion of expected value, which is uniformly negative (from the player's perspective). This advantage is called the "house edge". In games such as poker where players play against each other, the house takes a commission called the rake. Casinos sometimes give out complimentary items or comps to gamblers. "Payout" is the percentage of funds ("winnings") returned to players. Casinos in the United States say that a player staking money won from the casino is "playing with the house's money". Video Lottery Machines (slot machines) have become one of the most popular forms of gambling in casinos. 
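The "house edge" described above is just an expected-value calculation. As a self-contained illustration (using the standard textbook example of an even-money bet on American double-zero roulette, which is not drawn from this article), the player's expected return per unit staked works out to about -5.3%:

```python
from fractions import Fraction

# House edge as expected value: an even-money bet (e.g. on red) in American
# double-zero roulette wins on 18 of the 38 pockets and loses on the other 20.

p_win = Fraction(18, 38)
p_lose = Fraction(20, 38)
expected_value = p_win * 1 + p_lose * (-1)   # net result per unit staked

print(expected_value)           # -1/19
print(float(expected_value))    # about -0.0526, i.e. a 5.26% house edge
```

Because every wager has this uniformly negative expectation for the player, the house's advantage compounds over volume, which is exactly why the odds are described as "mathematically determined" in the casino's favor.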
Investigative reports have started calling into question whether the modern-day slot machine is addictive. Casino design—regarded as a psychological exercise—is an intricate process that involves optimising floor plan, décor and atmospherics to encourage gambling. Factors influencing gambling tendencies include sound, odour and lighting. Natasha Dow Schüll, an anthropologist at the Massachusetts Institute of Technology, highlights the decision of the audio directors at Silicon Gaming to make its slot machines resonate in "the universally pleasant tone of C, sampling existing casino soundscapes to create a sound that would please but not clash". Alan Hirsch, founder of the Smell & Taste Treatment and Research Foundation in Chicago, studied the impact of certain scents on gamblers, discerning that a pleasant albeit unidentifiable odor released by Las Vegas slot machines generated about 50% more in daily revenue. He suggested that the scent acted as an aphrodisiac, causing a more aggressive form of gambling. The following lists major casino markets in the world with casino revenue of over US$1 billion as published in PricewaterhouseCoopers's report on the outlook for the global casino market: According to Bloomberg, accumulated revenue of the biggest casino operator companies worldwide amounted to almost US$55 billion in 2011. SJM Holdings Ltd. was the leading company in this field, earning $9.7 bn in 2011, followed by Las Vegas Sands Corp. at $7.4 bn. The third-biggest casino operator company (based on revenue) was Caesars Entertainment, with revenue of US$6.2 bn. While there are casinos in many places, a few places have become well known specifically for gambling. Perhaps the place almost defined by its casino is Monte Carlo, but other places are known as gambling centers. Monte Carlo Casino, located in Monte Carlo city, in Monaco, is a casino and a tourist attraction. 
Monte Carlo Casino has been depicted in many books, including Ben Mezrich's "Busting Vegas", where a group of Massachusetts Institute of Technology students beat the casino out of nearly $1 million. This book is based on real people and events; however, many of those events are contested by main character Semyon Dukach. Monte Carlo Casino has also been featured in multiple James Bond novels and films. The casino is mentioned in the song "The Man Who Broke the Bank at Monte Carlo" as well as the film of the same name. Casinò di Campione is located in the tiny Italian enclave of Campione d'Italia, within Ticino, Switzerland. The casino was founded in 1917 as a site to gather information from foreign diplomats during the First World War. Today it is owned by the Italian government, and operated by the municipality. With gambling laws less strict than those of Italy and Switzerland, it is among the most popular gambling destinations besides Monte Carlo. The income from the casino is sufficient for the operation of Campione without the imposition of taxes, or obtaining of other revenue. In 2007, the casino moved into new premises, making it the largest casino in Europe. The new casino was built alongside the old one, which dated from 1933 and has since been demolished. The former Portuguese colony of Macau, a special administrative region of the People's Republic of China since 1999, is a popular destination for visitors who wish to gamble. This started in Portuguese times, when Macau was popular with visitors from nearby British Hong Kong, where gambling was more closely regulated. The Venetian Macao is currently the largest casino in the world. Macau also surpassed Las Vegas as the largest gambling market in the world. In Germany, machine-based gaming is only permitted in land-based casinos, restaurants, bars and gaming halls, and only subject to a licence. Online slots are, at the moment, only permitted if they are operated under a Schleswig-Holstein licence. 
AWPs (amusement-with-prizes machines) are governed by German federal law – the Trade Regulation Act and the Gaming Ordinance. The Casino Estoril, located in the municipality of Cascais, on the Portuguese Riviera, near Lisbon, is the largest casino in Europe by capacity. During the Second World War, it was reputed to be a gathering point for spies, dispossessed royals, and wartime adventurers; it became an inspiration for Ian Fleming's James Bond 007 novel "Casino Royale". Singapore is an up-and-coming destination for visitors wanting to gamble, although there are currently only two casinos (both foreign owned). The Marina Bay Sands is the most expensive standalone casino in the world, at a price of US$8 billion, and is among the world's ten most expensive buildings. The Resorts World Sentosa has the world's largest oceanarium. With currently over 1,000 casinos, the United States has the largest number of casinos in the world. The number continues to grow steadily as more states seek to legalize casinos. 40 states now have some form of casino gambling. Relatively small places such as Las Vegas are best known for gambling; larger cities such as Chicago are not defined by their casinos in spite of the large turnover. The Las Vegas Valley has the largest concentration of casinos in the United States. Based on revenue, Atlantic City, New Jersey ranks second, and the Chicago region third. Top American casino markets by revenue (2015 annual revenues): The Nevada Gaming Control Board divides Clark County, which is coextensive with the Las Vegas metropolitan area, into seven market regions for reporting purposes. Native American gaming has been responsible for a rise in the number of casinos outside of Las Vegas and Atlantic City. Given the large amounts of currency handled within a casino, both patrons and staff may be tempted to cheat and steal, in collusion or independently; most casinos have security measures to prevent this. 
Security cameras located throughout the casino are the most basic measure. Modern casino security is usually divided between a physical security force and a specialized surveillance department. The physical security force usually patrols the casino and responds to calls for assistance and reports of suspicious or definite criminal activity. A specialized surveillance department operates the casino's closed circuit television system, known in the industry as the "eye in the sky". Both of these specialized casino security departments work very closely with each other to ensure the safety of both guests and the casino's assets, and have been quite successful in preventing crime. Some casinos also have catwalks in the ceiling above the casino floor, which allow surveillance personnel to look directly down, through one-way glass, on the activities at the tables and slot machines. When it opened in 1989, The Mirage was the first casino to use cameras full-time on all table games. In addition to cameras and other technological measures, casinos also enforce security through rules of conduct and behavior; for example, players at card games are required to keep the cards they are holding in their hands visible at all times. Over the past few decades, casinos have developed many different marketing techniques for attracting and maintaining loyal patrons. Many casinos use loyalty rewards programs to track players' spending habits and target their patrons more effectively, sending mailings with free slot play and other promotions. Casinos have been linked to organised crime, with early casinos in Las Vegas originally dominated by the American Mafia and in Macau by Triad syndicates. According to some police reports, local incidence of reported crime often doubles or triples within three years of a casino's opening. 
In a 2004 report by the US Department of Justice, researchers interviewed people who had been arrested in Las Vegas and Des Moines and found that the percentage of problem or pathological gamblers among the arrestees was three to five times higher than in the general population. It has been said that economic studies showing a positive relationship between casinos and crime usually fail to consider the visiting population: they count crimes committed by visitors but do not count visitors in the population measure, which overstates the crime rate. Part of the reason this methodology is used, despite the overstatement, is that reliable data on tourist counts are often not available. There are unique occupational health issues in the casino industry. The most common are cancers resulting from exposure to second-hand tobacco smoke and musculoskeletal injuries (MSI) caused by repetitive motions while running table games over many hours.
Major depressive disorder Major depressive disorder (MDD), also known simply as depression, is a mental disorder characterized by at least two weeks of low mood that is present across most situations. It is often accompanied by low self-esteem, loss of interest in normally enjoyable activities, low energy, and pain without a clear cause. Those affected may also occasionally have false beliefs or see or hear things that others cannot. Some people have periods of depression separated by years in which they are normal, while others nearly always have symptoms present. Major depressive disorder can negatively affect a person's personal life, work life, or education as well as sleeping, eating habits, and general health. About 2–8% of adults with major depression die by suicide, and about 50% of people who die by suicide had depression or another mood disorder. The cause is believed to be a combination of genetic, environmental, and psychological factors. Risk factors include a family history of the condition, major life changes, certain medications, chronic health problems, and substance abuse. About 40% of the risk appears to be related to genetics. The diagnosis of major depressive disorder is based on the person's reported experiences and a mental status examination. There is no laboratory test for the disorder. Testing, however, may be done to rule out physical conditions that can cause similar symptoms. Major depression is more severe and lasts longer than sadness, which is a normal part of life. Since 2016, the United States Preventive Services Task Force (USPSTF) has recommended screening for depression among those over the age of 12, while a 2005 Cochrane review found that the routine use of screening questionnaires has little effect on detection or treatment. Those with major depressive disorder are typically treated with counseling and antidepressant medication. 
Medication appears to be effective, but the effect may only be significant in the most severely depressed. It is unclear whether medications affect the risk of suicide. Types of counseling used include cognitive behavioral therapy (CBT) and interpersonal therapy. If other measures are not effective, electroconvulsive therapy (ECT) may be considered. Hospitalization may be necessary in cases with a risk of harm to self and may occasionally occur against a person's wishes. Major depressive disorder affected approximately 163 million people (2% of the world's population) in 2017. The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. Lifetime rates are higher in the developed world (15%) compared to the developing world (11%). The disorder causes the second-most years lived with disability, after lower back pain. The most common time of onset is in a person's 20s and 30s. Females are affected about twice as often as males. The American Psychiatric Association added "major depressive disorder" to the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-III) in 1980. It was a split of the previous depressive neurosis in the DSM-II, which also encompassed the conditions now known as dysthymia and adjustment disorder with depressed mood. Those currently or previously affected may be stigmatized. Major depression significantly affects a person's family and personal relationships, work or school life, sleeping and eating habits, and general health. Its impact on functioning and well-being has been compared to that of other chronic medical conditions, such as diabetes. A person having a major depressive episode usually exhibits a very low mood, which pervades all aspects of life, and an inability to experience pleasure in activities that were formerly enjoyed. 
Depressed people may be preoccupied with, or ruminate over, thoughts and feelings of worthlessness, inappropriate guilt or regret, helplessness, hopelessness, and self-hatred. In severe cases, depressed people may have symptoms of psychosis. These symptoms include delusions or, less commonly, hallucinations, usually unpleasant. Other symptoms of depression include poor concentration and memory (especially in those with melancholic or psychotic features), withdrawal from social situations and activities, reduced sex drive, irritability, and thoughts of death or suicide. Insomnia is common among the depressed. In the typical pattern, a person wakes very early and cannot get back to sleep. Hypersomnia, or oversleeping, can also happen. Some antidepressants may also cause insomnia due to their stimulating effect. A depressed person may report multiple physical symptoms such as fatigue, headaches, or digestive problems; physical complaints are the most common presenting problem in developing countries, according to the World Health Organization's criteria for depression. Appetite often decreases, with resulting weight loss, although increased appetite and weight gain occasionally occur. Family and friends may notice that the person's behavior is either agitated or lethargic. Older depressed people may have cognitive symptoms of recent onset, such as forgetfulness, and a more noticeable slowing of movements. Depression often coexists with physical disorders common among the elderly, such as stroke, other cardiovascular diseases, Parkinson's disease, and chronic obstructive pulmonary disease. Depressed children may often display an irritable mood rather than a depressed one, and show varying symptoms depending on age and situation. Most lose interest in school and show a decline in academic performance. They may be described as clingy, demanding, dependent, or insecure. Diagnosis may be delayed or missed when symptoms are interpreted as "normal moodiness." 
Major depression frequently co-occurs with other psychiatric problems. The 1990–92 "National Comorbidity Survey" (US) reports that half of those with major depression also have lifetime anxiety and its associated disorders such as generalized anxiety disorder. Anxiety symptoms can have a major impact on the course of a depressive illness, with delayed recovery, increased risk of relapse, greater disability and increased suicide attempts. There are increased rates of alcohol and drug abuse and particularly dependence, and around a third of individuals diagnosed with ADHD develop comorbid depression. Post-traumatic stress disorder and depression often co-occur. Depression may also coexist with attention deficit hyperactivity disorder (ADHD), complicating the diagnosis and treatment of both. Depression is also frequently comorbid with alcohol abuse and personality disorders. Depression can also be exacerbated during particular months (usually winter) for those with seasonal affective disorder. While overuse of digital media has been associated with depressive symptoms, digital media may also be utilised in some situations to improve mood. Depression and pain often co-occur. One or more pain symptoms are present in 65% of depressed patients, and anywhere from 5 to 85% of patients with pain will be suffering from depression, depending on the setting; there is a lower prevalence in general practice, and higher in specialty clinics. The diagnosis of depression is often delayed or missed, and the outcome can worsen if the depression is noticed but completely misunderstood. Depression is also associated with a 1.5- to 2-fold increased risk of cardiovascular disease, independent of other known risk factors, and is itself linked directly or indirectly to risk factors such as smoking and obesity. 
People with major depression are less likely to follow medical recommendations for treating and preventing cardiovascular disorders, which further increases their risk of medical complications. In addition, cardiologists may not recognize underlying depression that complicates a cardiovascular problem under their care. The cause of major depressive disorder is unknown. The biopsychosocial model proposes that biological, psychological, and social factors all play a role in causing depression. The diathesis–stress model specifies that depression results when a preexisting vulnerability, or diathesis, is activated by stressful life events. The preexisting vulnerability can be either genetic, implying an interaction between nature and nurture, or schematic, resulting from views of the world learned in childhood. Childhood abuse, whether physical, sexual or psychological, is a risk factor for depression, among other psychiatric issues that co-occur such as anxiety and drug abuse. Childhood trauma also correlates with severity of depression, lack of response to treatment and length of illness. However, some people are more susceptible to developing mental illness such as depression after trauma, and various genes have been suggested to control susceptibility. Family and twin studies find that nearly 40% of individual differences in risk for major depressive disorder can be explained by genetic factors. Like most psychiatric disorders, major depressive disorder is likely to be influenced by many individual genetic changes. In 2018, a genome-wide association study discovered 44 variants in the genome linked to risk for major depression. This was followed by a 2019 study that found 102 variants in the genome linked to depression. The short allele of 5-HTTLPR, the serotonin transporter promoter region, has been associated with increased risk of depression. However, since the 1990s, results have been inconsistent, with three recent reviews finding an effect and two finding none. 
Other genes that have been linked to a gene-environment interaction include CRHR1, FKBP5 and BDNF, the first two of which are related to the stress reaction of the HPA axis, and the latter of which is involved in neurogenesis. There are no conclusive effects of candidate genes on depression, either alone or in combination with life stress. Research focusing on specific candidate genes has been criticized for its tendency to generate false positive findings. There are also other efforts to examine interactions between life stress and polygenic risk for depression. Depression may also come secondary to a chronic or terminal medical condition, such as HIV/AIDS or asthma, and may be labeled "secondary depression." It is unknown whether the underlying diseases induce depression through effects on quality of life, or through shared etiologies (such as degeneration of the basal ganglia in Parkinson's disease or immune dysregulation in asthma). Depression may also be iatrogenic (the result of healthcare), such as drug-induced depression. Therapies associated with depression include interferons, beta-blockers, isotretinoin, contraceptives, cardiac agents, anticonvulsants, antimigraine drugs, antipsychotics, and hormonal agents such as gonadotropin-releasing hormone agonists. Drug abuse at an early age is also associated with increased risk of developing depression later in life. Depression that occurs after childbirth is called postpartum depression, and is thought to be the result of hormonal changes associated with pregnancy. Seasonal affective disorder, a type of depression associated with seasonal changes in sunlight, is thought to be the result of decreased sunlight. The pathophysiology of depression is not yet understood, but current theories center around monoaminergic systems, the circadian rhythm, immunological dysfunction, HPA axis dysfunction and structural or functional abnormalities of emotional circuits. 
The monoamine theory, derived from the efficacy of monoaminergic drugs in treating depression, was the dominant theory until recently. The theory postulates that insufficient activity of monoamine neurotransmitters is the primary cause of depression. Evidence for the monoamine theory comes from multiple areas. Firstly, acute depletion of tryptophan, a necessary precursor of serotonin, a monoamine, can cause depression in those in remission or relatives of depressed patients; this suggests that decreased serotonergic neurotransmission is important in depression. Secondly, the correlation between depression risk and polymorphisms in the 5-HTTLPR gene, which codes for the serotonin transporter, suggests a link. Third, decreased size of the locus coeruleus, decreased activity of tyrosine hydroxylase, increased density of alpha-2 adrenergic receptors, and evidence from rat models suggest decreased adrenergic neurotransmission in depression. Furthermore, decreased levels of homovanillic acid, altered response to dextroamphetamine, responses of depressive symptoms to dopamine receptor agonists, decreased dopamine receptor D1 binding in the striatum, and polymorphism of dopamine receptor genes implicate dopamine, another monoamine, in depression. Lastly, increased activity of monoamine oxidase, which degrades monoamines, has been associated with depression. However, this theory is inconsistent with the fact that serotonin depletion does not cause depression in healthy persons, the fact that antidepressants instantly increase levels of monoamines but take weeks to work, and the existence of atypical antidepressants which can be effective despite not targeting this pathway. One proposed explanation for the therapeutic lag, and further support for the deficiency of monoamines, is a desensitization of self-inhibition in raphe nuclei by the increased serotonin mediated by antidepressants. 
However, disinhibition of the dorsal raphe has been proposed to occur as a result of "decreased" serotonergic activity in tryptophan depletion, resulting in a depressed state mediated by increased serotonin. Further countering the monoamine hypothesis is the fact that rats with lesions of the dorsal raphe are not more depressive than controls, the finding of increased jugular 5-HIAA in depressed patients that normalized with SSRI treatment, and the preference for carbohydrates in depressed patients. Already limited, the monoamine hypothesis has been further oversimplified when presented to the general public. Immune system abnormalities have been observed, including increased levels of cytokines involved in generating sickness behavior (which shares overlap with depression). The effectiveness of nonsteroidal anti-inflammatory drugs (NSAIDs) and cytokine inhibitors in treating depression, and normalization of cytokine levels after successful treatment further suggest immune system abnormalities in depression. HPA axis abnormalities have been suggested in depression given the association of CRHR1 with depression and the increased frequency of dexamethasone test non-suppression in depressed patients. However, this abnormality is not adequate as a diagnostic tool, because its sensitivity is only 44%. These stress-related abnormalities have been hypothesized to be the cause of hippocampal volume reductions seen in depressed patients. Furthermore, a meta-analysis yielded decreased dexamethasone suppression, and increased response to psychological stressors. Further abnormal results have been obtained with the cortisol awakening response, with increased response being associated with depression. Theories unifying neuroimaging findings have been proposed. The first model proposed is the "Limbic Cortical Model", which involves hyperactivity of the ventral paralimbic regions and hypoactivity of frontal regulatory regions in emotional processing. 
Another model, the "Cortico-Striatal model", suggests that abnormalities of the prefrontal cortex in regulating striatal and subcortical structures result in depression. Another model proposes hyperactivity of salience structures in identifying negative stimuli, and hypoactivity of cortical regulatory structures resulting in a negative emotional bias and depression, consistent with emotional bias studies. A diagnostic assessment may be conducted by a suitably trained general practitioner, or by a psychiatrist or psychologist, who records the person's current circumstances, biographical history, current symptoms, and family history. The broad clinical aim is to formulate the relevant biological, psychological, and social factors that may be impacting on the individual's mood. The assessor may also discuss the person's current ways of regulating mood (healthy or otherwise) such as alcohol and drug use. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, in particular the presence of themes of hopelessness or pessimism, self-harm or suicide, and an absence of positive thoughts or plans. Specialist mental health services are rare in rural areas, and thus diagnosis and management are left largely to primary-care clinicians. This issue is even more marked in developing countries. The mental health examination may include the use of a rating scale such as the Hamilton Rating Scale for Depression, the Beck Depression Inventory or the Suicide Behaviors Questionnaire-Revised. The score on a rating scale alone is insufficient to diagnose depression to the satisfaction of the DSM or ICD, but it provides an indication of the severity of symptoms for a time period, so a person who scores above a given cut-off point can be more thoroughly evaluated for a depressive disorder diagnosis. Several rating scales are used for this purpose. 
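As an illustration of how such a cut-off works in practice, the check below sums item scores and flags anyone above a threshold for fuller evaluation. The threshold and item scores here are hypothetical and do not correspond to any particular scale:

```python
# Hypothetical screening cut-off: a rating-scale score is only a severity
# indicator, never a diagnosis; exceeding the cut-off triggers a fuller
# clinical evaluation, as the text describes.
CUTOFF = 14  # hypothetical threshold, for illustration only

def needs_full_evaluation(item_scores: list[int]) -> bool:
    """Sum the questionnaire items and compare the total to the cut-off."""
    return sum(item_scores) >= CUTOFF

print(needs_full_evaluation([1, 0, 2, 3, 1, 0, 2]))  # total 9  -> False
print(needs_full_evaluation([3, 2, 3, 2, 3, 1, 2]))  # total 16 -> True
```

The point mirrored from the text: crossing the cut-off does not diagnose depression; it only selects who is evaluated more thoroughly.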
Primary-care physicians and other non-psychiatrist physicians underrecognize and undertreat depression more often than psychiatrists do, in part because of the physical symptoms that often accompany depression, in addition to many potential patient, provider, and system barriers. A review found that non-psychiatrist physicians miss about two-thirds of cases, though this has improved somewhat in more recent studies. Before diagnosing a major depressive disorder, in general a doctor performs a medical examination and selected investigations to rule out other causes of symptoms. These include blood tests measuring TSH and thyroxine to exclude hypothyroidism; basic electrolytes and serum calcium to rule out a metabolic disturbance; and a full blood count including ESR to rule out a systemic infection or chronic disease. Adverse affective reactions to medications or alcohol misuse are often ruled out, as well. Testosterone levels may be evaluated to diagnose hypogonadism, a cause of depression in men. Vitamin D levels might be evaluated, as low levels of vitamin D have been associated with greater risk for depression. Subjective cognitive complaints appear in older depressed people, but they can also be indicative of the onset of a dementing disorder, such as Alzheimer's disease. Cognitive testing and brain imaging can help distinguish depression from dementia. A CT scan can exclude brain pathology in those with psychotic, rapid-onset or otherwise unusual symptoms. In general, investigations are not repeated for a subsequent episode unless there is a medical indication. No biological tests confirm major depression. Biomarkers of depression have been sought to provide an objective method of diagnosis. There are several potential biomarkers, including brain-derived neurotrophic factor and various functional MRI (fMRI) techniques. 
One study developed a decision tree model of interpreting a series of fMRI scans taken during various activities. In their subjects, the authors of that study were able to achieve a sensitivity of 80% and a specificity of 87%, corresponding to a negative predictive value of 98% and a positive predictive value of 32% (positive and negative likelihood ratios were 6.15 and 0.23, respectively). However, much more research is needed before these tests can be used clinically. The most widely used criteria for diagnosing depressive conditions are found in the American Psychiatric Association's "Diagnostic and Statistical Manual of Mental Disorders" and the World Health Organization's "International Statistical Classification of Diseases and Related Health Problems", which uses the name "depressive episode" for a single episode and "recurrent depressive disorder" for repeated episodes. The latter system is typically used in European countries, while the former is used in the US and many other non-European nations, and the authors of both have worked towards conforming one with the other. Both DSM-5 and ICD-10 mark out typical (main) depressive symptoms. ICD-10 defines three typical depressive symptoms (depressed mood, anhedonia, and reduced energy), two of which should be present to determine the depressive disorder diagnosis. According to DSM-5, there are two main depressive symptoms: a depressed mood and loss of interest/pleasure in activities (anhedonia). A total of at least five of the nine listed symptoms, including at least one of these two main symptoms, must occur frequently for more than two weeks (to the extent that they impair functioning) for the diagnosis. Major depressive disorder is classified as a mood disorder in DSM-5. The diagnosis hinges on the presence of single or recurrent major depressive episodes. Further qualifiers are used to classify both the episode itself and the course of the disorder. 
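The accuracy figures quoted for the fMRI decision-tree study above are mutually consistent, and the likelihood ratios follow arithmetically from the sensitivity and specificity. A minimal sketch of that arithmetic (the roughly 7% prevalence used for the predictive values is an assumption, backed out from the reported PPV/NPV rather than stated in the text):

```python
# Likelihood ratios follow directly from sensitivity and specificity:
#   LR+ = sens / (1 - spec),  LR- = (1 - sens) / spec
def likelihood_ratios(sens: float, spec: float) -> tuple[float, float]:
    return sens / (1 - spec), (1 - sens) / spec

def predictive_values(sens: float, spec: float, prevalence: float) -> tuple[float, float]:
    # Bayes' rule applied to a 2x2 diagnostic table.
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    tn = spec * (1 - prevalence)
    fn = (1 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

lr_pos, lr_neg = likelihood_ratios(0.80, 0.87)
print(round(lr_pos, 2), round(lr_neg, 2))  # 6.15 0.23, matching the reported ratios

# A depression prevalence of about 7.1% in the study sample (an assumption,
# not stated in the text) reproduces the reported predictive values:
ppv, npv = predictive_values(0.80, 0.87, 0.071)
print(round(ppv, 2), round(npv, 2))        # 0.32 0.98
```

This also illustrates why the study's 32% PPV is so much lower than its 87% specificity: predictive values depend on prevalence, while likelihood ratios do not.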
The category Unspecified Depressive Disorder is diagnosed if the depressive episode's manifestation does not meet the criteria for a major depressive episode. The ICD-10 system does not use the term "major depressive disorder" but lists very similar criteria for the diagnosis of a depressive episode (mild, moderate or severe); the term "recurrent" may be added if there have been multiple episodes without mania. A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features—commonly referred to as "psychotic depression"—is automatically rated as severe. If the patient has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as "unipolar" because the mood remains at one emotional state or "pole". DSM-IV-TR excludes cases where the symptoms are a result of bereavement, although it is possible for normal bereavement to evolve into a depressive episode if the mood persists and the characteristic features of a major depressive episode develop. The criteria were criticized because they do not take into account any other aspects of the personal and social context in which depression can occur. In addition, some studies have found little empirical support for the DSM-IV cut-off criteria, indicating they are a diagnostic convention imposed on a continuum of depressive symptoms of varying severity and duration. Bereavement is no longer an exclusion criterion in DSM-5, and it is now up to the clinician to distinguish between normal reactions to a loss and MDD. 
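The counting rule behind a major depressive episode as described above (at least one main symptom, a minimum symptom total, and a two-week duration) can be sketched as a simple predicate. Symptom names are shorthand, and the function illustrates only the counting rule, not a diagnostic procedure:

```python
# DSM-5-style counting rule: >=5 of the nine listed symptoms, including at
# least one of the two main symptoms, persisting for two weeks or more.
CORE = {"depressed_mood", "anhedonia"}

def meets_episode_criteria(symptoms: set[str], duration_days: int) -> bool:
    """Illustration of the symptom-count rule only; real diagnosis also
    requires functional impairment and exclusion of other causes, as the
    surrounding text describes."""
    return (
        bool(symptoms & CORE)      # at least one main symptom
        and len(symptoms) >= 5     # at least five symptoms in total
        and duration_days >= 14    # at least two weeks
    )

print(meets_episode_criteria(
    {"depressed_mood", "insomnia", "fatigue", "worthlessness", "poor_concentration"},
    21,
))  # True
```

Dropping the main symptom, reducing the count below five, or shortening the duration below two weeks each makes the predicate false, mirroring how the criteria are described in the text.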
Excluded are a range of related diagnoses, including dysthymia, which involves a chronic but milder mood disturbance; recurrent brief depression, consisting of briefer depressive episodes; minor depressive disorder, whereby only some symptoms of major depression are present; and adjustment disorder with depressed mood, which denotes low mood resulting from a psychological response to an identifiable event or stressor. Three new depressive disorders were added to the DSM-5: disruptive mood dysregulation disorder, characterized by significant childhood irritability and tantrums; premenstrual dysphoric disorder (PMDD), causing periods of anxiety, depression, or irritability in the week or two before a woman's menstruation; and persistent depressive disorder. The DSM-5 recognizes six further subtypes of MDD, called "specifiers", in addition to noting the length, severity and presence of psychotic features. In 2016, the United States Preventive Services Task Force (USPSTF) recommended screening in the adult population, with evidence that it increases the detection of people with depression and that proper treatment improves outcomes. They recommend screening in those between the ages of 12 and 18 as well. A Cochrane review from 2005 found screening programs do not significantly improve detection rates, treatment, or outcomes. To confirm major depressive disorder as the most likely diagnosis, other potential diagnoses must be considered, including dysthymia, adjustment disorder with depressed mood, or bipolar disorder. Dysthymia is a chronic, milder mood disturbance in which a person reports a low mood almost daily over a span of at least two years. The symptoms are not as severe as those for major depression, although people with dysthymia are vulnerable to secondary episodes of major depression (sometimes referred to as "double depression"). 
Adjustment disorder with depressed mood is a mood disturbance appearing as a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode. Bipolar disorder, also known as "manic–depressive disorder", is a condition in which depressive phases alternate with periods of mania or hypomania. Although depression is currently categorized as a separate disorder, there is ongoing debate because individuals diagnosed with major depression often experience some hypomanic symptoms, indicating a mood disorder continuum. Further differential diagnoses include chronic fatigue syndrome. Other disorders need to be ruled out before diagnosing major depressive disorder. They include depressions due to physical illness, medications, and substance abuse. Depression due to physical illness is diagnosed as a mood disorder due to a general medical condition. This condition is determined based on history, laboratory findings, or physical examination. When the depression is caused by a medication, drug of abuse, or exposure to a toxin, it is then diagnosed as a specific mood disorder (previously called "substance-induced mood disorder" in the DSM-IV-TR). Preventive efforts may reduce rates of the condition by between 22 and 38%. Eating large amounts of fish may also reduce the risk. Behavioral interventions, such as interpersonal therapy and cognitive-behavioral therapy, are effective at preventing new onset depression. Because such interventions appear to be most effective when delivered to individuals or small groups, it has been suggested that they may be able to reach their large target audience most efficiently through the Internet. 
However, an earlier meta-analysis found preventive programs with a competence-enhancing component to be superior to behavior-oriented programs overall, and found behavioral programs to be particularly unhelpful for older people, for whom social support programs were uniquely beneficial. In addition, the programs that best prevented depression comprised more than eight sessions, each lasting between 60 and 90 minutes, were provided by a combination of lay and professional workers, had a high-quality research design, reported attrition rates, and had a well-defined intervention. The Netherlands mental health care system provides preventive interventions, such as the "Coping with Depression" course (CWD) for people with sub-threshold depression. The course is claimed to be the most successful of psychoeducational interventions for the treatment and prevention of depression (both for its adaptability to various populations and its results), with a risk reduction of 38% in major depression and an efficacy as a treatment comparing favorably to other psychotherapies. The three most common treatments for depression are psychotherapy, medication, and electroconvulsive therapy. Psychotherapy is the treatment of choice (over medication) for people under 18. The UK National Institute for Health and Care Excellence (NICE) 2004 guidelines indicate that antidepressants should not be used for the initial treatment of mild depression because the risk-benefit ratio is poor. The guidelines recommend that antidepressant treatment in combination with psychosocial interventions should be considered in certain circumstances. The guidelines further note that antidepressant treatment should be continued for at least six months to reduce the risk of relapse, and that SSRIs are better tolerated than tricyclic antidepressants. 
American Psychiatric Association treatment guidelines recommend that initial treatment should be individually tailored based on factors including severity of symptoms, co-existing disorders, prior treatment experience, and patient preference. Options may include pharmacotherapy, psychotherapy, exercise, electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS) or light therapy. Antidepressant medication is recommended as an initial treatment choice in people with mild, moderate, or severe major depression, and should be given to all patients with severe depression unless ECT is planned. There is evidence that collaborative care by a team of health care practitioners produces better results than routine single-practitioner care. Treatment options are much more limited in developing countries, where access to mental health staff, medication, and psychotherapy is often difficult. Development of mental health services is minimal in many countries; depression is viewed as a phenomenon of the developed world despite evidence to the contrary, and not as an inherently life-threatening condition. A 2014 Cochrane review found insufficient evidence to determine the effectiveness of psychological versus medical therapy in children. Physical exercise is recommended for management of mild depression, and has a moderate effect on symptoms. Exercise has also been found to be effective for (unipolar) major depression. It is equivalent to the use of medications or psychological therapies in most people. In older people it does appear to decrease depression. Exercise may be recommended to people who are willing, motivated, and physically healthy enough to participate in an exercise program as treatment. There is a small amount of evidence that skipping a night's sleep may improve depressive symptoms, with the effects usually showing up within a day. This effect is usually temporary. Besides sleepiness, this method can cause a side effect of mania or hypomania. 
In observational studies, smoking cessation has benefits in depression as large as or larger than those of medications. Besides exercise, sleep and diet may play a role in depression, and interventions in these areas may be an effective add-on to conventional methods. Talking therapy (psychotherapy) can be delivered to individuals, groups, or families by mental health professionals. A 2017 review found that cognitive behavioral therapy appears to be similar to antidepressant medication in terms of effect. A 2012 review found psychotherapy to be better than no treatment but not other treatments. With more complex and chronic forms of depression, a combination of medication and psychotherapy may be used. A 2014 Cochrane review found that work-directed interventions combined with clinical interventions helped to reduce sick days taken by people with depression. There is moderate-quality evidence that psychological therapies are a useful addition to standard antidepressant treatment of treatment-resistant depression in the short term. Psychotherapy has been shown to be effective in older people. Successful psychotherapy appears to reduce the recurrence of depression even after it has been stopped or replaced by occasional booster sessions. Cognitive behavioral therapy (CBT) currently has the most research evidence for the treatment of depression in children and adolescents, and CBT and interpersonal psychotherapy (IPT) are preferred therapies for adolescent depression. In people under 18, according to the National Institute for Health and Clinical Excellence, medication should be offered only in conjunction with a psychological therapy, such as CBT, interpersonal therapy, or family therapy. Cognitive behavioral therapy has also been shown to reduce the number of sick days taken by people with depression, when used in conjunction with primary care. 
The most-studied form of psychotherapy for depression is CBT, which teaches clients to challenge self-defeating but enduring ways of thinking (cognitions) and change counter-productive behaviors. Research beginning in the mid-1990s suggested that CBT could perform as well as or better than antidepressants in patients with moderate to severe depression. CBT may be effective in depressed adolescents, although its effects on severe episodes are not definitively known. Several variables predict success for cognitive behavioral therapy in adolescents: higher levels of rational thoughts, less hopelessness, fewer negative thoughts, and fewer cognitive distortions. CBT is particularly beneficial in preventing relapse. Cognitive behavioral therapy and occupational programs (including modification of work activities and assistance) have been shown to be effective in reducing sick days taken by workers with depression. Several variants of cognitive behavior therapy have been used in those with depression, the most notable being rational emotive behavior therapy and mindfulness-based cognitive therapy. Mindfulness-based stress reduction programs may reduce depression symptoms. Mindfulness programs also appear to be a promising intervention in youth. Psychoanalysis is a school of thought, founded by Sigmund Freud, which emphasizes the resolution of unconscious mental conflicts. Psychoanalytic techniques are used by some practitioners to treat clients presenting with major depression. A more widely practiced therapy, called psychodynamic psychotherapy, is in the tradition of psychoanalysis but less intensive, meeting once or twice a week. It also tends to focus more on the person's immediate problems, and has an additional social and interpersonal focus. In a meta-analysis of three controlled trials of Short Psychodynamic Supportive Psychotherapy, this modification was found to be as effective as medication for mild to moderate depression.
Conflicting results have arisen from studies that look at the effectiveness of antidepressants in people with acute, mild to moderate depression. Stronger evidence supports the usefulness of antidepressants in the treatment of depression that is chronic (dysthymia) or severe. While small benefits were found, researchers Irving Kirsch and Thomas Moore state they may be due to issues with the trials rather than a true effect of the medication. In a later publication, Kirsch concluded that the overall effect of new-generation antidepressant medication is below recommended criteria for clinical significance. Similar results were obtained in a meta-analysis by Fournier. A review commissioned by the National Institute for Health and Care Excellence (UK) concluded that there is strong evidence that selective serotonin reuptake inhibitors (SSRIs), such as escitalopram, paroxetine, and sertraline, have greater efficacy than placebo on achieving a 50% reduction in depression scores in moderate and severe major depression, and that there is some evidence for a similar effect in mild depression. Similarly, a Cochrane systematic review of clinical trials of the generic tricyclic antidepressant amitriptyline concluded that there is strong evidence that its efficacy is superior to placebo. A 2019 Cochrane review on the combined use of antidepressants plus benzodiazepines demonstrated improved effectiveness when compared to antidepressants alone; however, these effects were not maintained in the acute or continuous phase. The benefits of adding a benzodiazepine should be balanced against possible harms and other alternative treatment strategies when antidepressant mono-therapy is considered inadequate. In 2014 the U.S. Food and Drug Administration published a systematic review of all antidepressant maintenance trials submitted to the agency between 1985 and 2012.
The authors concluded that maintenance treatment reduced the risk of relapse by 52% compared to placebo, and that this effect was primarily due to recurrent depression in the placebo group rather than a drug withdrawal effect. To find the most effective antidepressant medication with minimal side-effects, the dosages can be adjusted, and if necessary, combinations of different classes of antidepressants can be tried. Response rates to the first antidepressant administered range from 50 to 75%, and it can take at least six to eight weeks from the start of medication to improvement. Antidepressant medication treatment is usually continued for 16 to 20 weeks after remission, to minimize the chance of recurrence, and even up to one year of continuation is recommended. People with chronic depression may need to take medication indefinitely to avoid relapse. SSRIs are the primary medications prescribed, owing to their relatively mild side-effects, and because they are less toxic in overdose than other antidepressants. People who do not respond to one SSRI can be switched to another antidepressant, and this results in improvement in almost 50% of cases. Another option is to switch to the atypical antidepressant bupropion. Venlafaxine, an antidepressant with a different mechanism of action, may be modestly more effective than SSRIs. However, venlafaxine is not recommended in the UK as a first-line treatment because of evidence suggesting its risks may outweigh benefits, and it is specifically discouraged in children and adolescents. For children, some research has supported the use of the SSRI antidepressant fluoxetine. The benefit however appears to be slight in children, while other antidepressants have not been shown to be effective. Medications are not recommended in children with mild disease. There is also insufficient evidence to determine effectiveness in those with depression complicated by dementia.
Any antidepressant can cause low blood sodium levels; nevertheless, it has been reported more often with SSRIs. It is not uncommon for SSRIs to cause or worsen insomnia; the sedating atypical antidepressant mirtazapine can be used in such cases. Irreversible monoamine oxidase inhibitors, an older class of antidepressants, have been plagued by potentially life-threatening dietary and drug interactions. They are still used only rarely, although newer and better-tolerated agents of this class have been developed. The safety profile is different with reversible monoamine oxidase inhibitors, such as moclobemide, where the risk of serious dietary interactions is negligible and dietary restrictions are less strict. For children, adolescents, and probably young adults between 18 and 24 years old, there is a higher risk of both suicidal ideations and suicidal behavior in those treated with SSRIs. For adults, it is unclear whether SSRIs affect the risk of suicidality. One review found no connection; another an increased risk; and a third no risk in those 25–65 years old and a decreased risk in those more than 65. A black box warning was introduced in the United States in 2007 on SSRIs and other antidepressant medications due to the increased risk of suicide in patients younger than 24 years old. Similar precautionary notice revisions were implemented by the Japanese Ministry of Health. There is some evidence that omega-3 fatty acid fish oil supplements with a high ratio of eicosapentaenoic acid (EPA) to docosahexaenoic acid (DHA) are effective in the treatment of, but not the prevention of, major depression. However, a Cochrane review determined there was insufficient high quality evidence to suggest omega-3 fatty acids were effective in depression. There is limited evidence that vitamin D supplementation is of value in alleviating the symptoms of depression in individuals who are vitamin D-deficient.
There is some preliminary evidence that COX-2 inhibitors, such as celecoxib, have a beneficial effect on major depression. Lithium appears effective at lowering the risk of suicide in those with bipolar disorder and unipolar depression to nearly the same levels as the general population. There is a narrow range of effective and safe dosages of lithium, so close monitoring may be needed. Low-dose thyroid hormone may be added to existing antidepressants to treat persistent depression symptoms in people who have tried multiple courses of medication. Limited evidence suggests stimulants, such as amphetamine and modafinil, may be effective in the short term, or as adjuvant therapy. Also, it is suggested that folate supplements may have a role in depression management. There is tentative evidence for benefit from testosterone in males. Electroconvulsive therapy (ECT) is a standard psychiatric treatment in which seizures are electrically induced in patients to provide relief from psychiatric illnesses. ECT is used with informed consent as a last line of intervention for major depressive disorder. A round of ECT is effective for about 50% of people with treatment-resistant major depressive disorder, whether it is unipolar or bipolar. Follow-up treatment is still poorly studied, but about half of people who respond relapse within twelve months. Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia. Immediately following treatment, the most common adverse effects are confusion and memory loss. ECT is considered one of the least harmful treatment options available for severely depressed pregnant women. A usual course of ECT involves multiple administrations, typically given two or three times per week, until the patient is no longer suffering symptoms. ECT is administered under anesthesia with a muscle relaxant.
Electroconvulsive therapy can differ in its application in three ways: electrode placement, frequency of treatments, and the electrical waveform of the stimulus. These three forms of application have significant differences in both adverse side effects and symptom remission. After treatment, drug therapy is usually continued, and some patients receive maintenance ECT. ECT appears to work in the short term via an anticonvulsant effect mostly in the frontal lobes, and longer term via neurotrophic effects primarily in the medial temporal lobe. Transcranial magnetic stimulation (TMS) or deep transcranial magnetic stimulation is a noninvasive method used to stimulate small regions of the brain. TMS was approved by the FDA for treatment-resistant major depressive disorder (trMDD) in 2008 and as of 2014 evidence supports that it is probably effective. The American Psychiatric Association, the Canadian Network for Mood and Anxiety Treatments, and the Royal Australian and New Zealand College of Psychiatrists have endorsed TMS for trMDD. Transcranial direct current stimulation (tDCS) is another noninvasive method used to stimulate small regions of the brain with the help of a weak electric current. Increasing evidence has been gathered for its efficacy as a depression treatment. A meta-analysis published in 2020, summarising results across nine studies (572 participants), concluded that active tDCS was significantly superior to sham for response (30.9% vs. 18.9%, respectively), remission (19.9% vs. 11.7%) and depression improvement. According to a 2016 meta-analysis, 34% of tDCS-treated patients showed at least 50% symptom reduction compared to 19% sham-treated across 6 randomised controlled trials. Bright light therapy reduces depression symptom severity, with benefit for both seasonal affective disorder and for nonseasonal depression, and an effect similar to those for conventional antidepressants.
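The tDCS response rates above (30.9% with active stimulation vs. 18.9% with sham) can be re-expressed as a number needed to treat (NNT), a standard way of presenting such differences. The NNT is not reported in the source, so this is an illustrative calculation only:

```python
# Response rates reported in the 2020 tDCS meta-analysis cited above.
# The NNT derived here is illustrative, not a figure from the source.
p_active = 0.309  # proportion responding to active tDCS
p_sham = 0.189    # proportion responding to sham stimulation

absolute_difference = p_active - p_sham  # absolute risk difference
nnt = 1 / absolute_difference            # number needed to treat

print(f"absolute difference: {absolute_difference:.1%}")  # 12.0%
print(f"number needed to treat: {nnt:.1f}")               # 8.3
```

Read literally, roughly one additional patient responds for every eight treated with active rather than sham stimulation, under the pooled trial conditions.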
One analysis found that for nonseasonal depression, adding light therapy to the standard antidepressant treatment was not effective. Another analysis of nonseasonal depression, in which light was used mostly in combination with antidepressants or wake therapy, found a moderate effect, with response better than control treatment in high-quality studies, in studies that applied morning light treatment, and with people who respond to total or partial sleep deprivation. Both analyses noted poor quality, short duration, and small size of most of the reviewed studies. There is insufficient evidence for Reiki and dance movement therapy in depression. As of 2019 cannabis is specifically not recommended as a treatment. Major depressive episodes often resolve over time whether or not they are treated. Outpatients on a waiting list show a 10–15% reduction in symptoms within a few months, with approximately 20% no longer meeting the full criteria for a depressive disorder. The median duration of an episode has been estimated to be 23 weeks, with the highest rate of recovery in the first three months. Studies have shown that 80% of those suffering from their first major depressive episode will suffer from at least one more during their life, with a lifetime average of 4 episodes. Other general population studies indicate that around half those who have an episode recover (whether treated or not) and remain well, while the other half will have at least one more, and around 15% of those experience chronic recurrence. Studies recruiting from selective inpatient sources suggest lower recovery and higher chronicity, while studies of mostly outpatients show that nearly all recover, with a median episode duration of 11 months. Around 90% of those with severe or psychotic depression, most of whom also meet criteria for other mental disorders, experience recurrence. A high proportion of people who experience full symptomatic remission still have at least one not fully resolved symptom after treatment.
Recurrence or chronicity is more likely if symptoms have not fully resolved with treatment. Current guidelines recommend continuing antidepressants for four to six months after remission to prevent relapse. Evidence from many randomized controlled trials indicates that continuing antidepressant medications after recovery can reduce the chance of relapse by 70% (41% on placebo vs. 18% on antidepressant). The preventive effect probably lasts for at least the first 36 months of use. People experiencing repeated episodes of depression require ongoing treatment in order to prevent more severe, long-term depression. In some cases, people must take medications for the rest of their lives. Poor outcomes are associated with inappropriate treatment, severe initial symptoms including psychosis, early age of onset, previous episodes, incomplete recovery after one year of treatment, pre-existing severe mental or medical disorder, and family dysfunction. Depressed individuals have a shorter life expectancy than those without depression, in part because depressed patients are at risk of dying by suicide. However, they also have a higher rate of dying from other causes, being more susceptible to medical conditions such as heart disease. Up to 60% of people who die by suicide have a mood disorder such as major depression, and the risk is especially high if a person has a marked sense of hopelessness or has both depression and borderline personality disorder. The lifetime risk of suicide associated with a diagnosis of major depression in the US is estimated at 3.4%, which averages two highly disparate figures of almost 7% for men and 1% for women (although suicide attempts are more frequent in women). The estimate is substantially lower than a previously accepted figure of 15%, which had been derived from older studies of hospitalized patients.
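The relapse figures quoted above (41% on placebo vs. 18% on antidepressant) do not give a 70% reduction when computed as a simple relative risk; the ~70% figure is closer to the reduction in the odds of relapse. A minimal sketch of both calculations (interpreting the published figure as odds-based is an assumption, not something stated in the source):

```python
# Relapse rates from the pooled maintenance trials cited above.
p_placebo = 0.41  # relapse rate on placebo
p_drug = 0.18     # relapse rate on continued antidepressant

# Relative risk reduction: 1 - (risk on drug / risk on placebo)
rrr = 1 - p_drug / p_placebo

# Odds-based reduction: 1 - odds ratio
odds = lambda p: p / (1 - p)
odds_reduction = 1 - odds(p_drug) / odds(p_placebo)

print(f"relative risk reduction: {rrr:.0%}")            # 56%
print(f"reduction in odds:       {odds_reduction:.0%}")  # 68%
```

The contrast illustrates a general point about such summaries: the same pair of event rates yields different headline percentages depending on whether risks or odds are compared.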
Major depression is currently the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide. In the year 2030, it is predicted to be the second-leading cause of disease burden worldwide after HIV, according to the WHO. Delay or failure in seeking treatment after relapse and the failure of health professionals to provide treatment are two barriers to reducing disability. Major depressive disorder affected approximately 163 million people in 2017 (2% of the global population). The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. In most countries the number of people who have depression during their lives falls within an 8–18% range. In North America, the probability of having a major depressive episode within a year-long period is 3–5% for males and 8–10% for females. Major depression is about twice as common in women as in men, although it is unclear why this is so, and whether factors unaccounted for are contributing to this. The relative increase in occurrence is related to pubertal development rather than chronological age, reaches adult ratios between the ages of 15 and 18, and appears associated with psychosocial more than hormonal factors. Depression is a major cause of disability worldwide. People are most likely to develop their first depressive episode between the ages of 30 and 40, and there is a second, smaller peak of incidence between ages 50 and 60. The risk of major depression is increased with neurological conditions such as stroke, Parkinson's disease, or multiple sclerosis, and during the first year after childbirth. It is also more common after cardiovascular illnesses, and is related more to those with a poor cardiac disease outcome than to a better one. Studies conflict on the prevalence of depression in the elderly, but most data suggest there is a reduction in this age group. 
Depressive disorders are more common in urban populations than in rural ones and the prevalence is increased in groups with poorer socioeconomic factors, e.g., homelessness. The Ancient Greek physician Hippocrates described a syndrome of melancholia as a distinct disease with particular mental and physical symptoms; he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment. It was a similar but far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions and obsessions were included. The term "depression" itself was derived from the Latin verb "deprimere", "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's "Chronicle" to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics. An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. The newer concept abandoned these associations and through the 19th century, became more associated with women. Although "melancholia" remained the dominant diagnostic term, "depression" gained increasing currency in medical treatises and was a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as "depressive states". Sigmund Freud likened the state of melancholia to mourning in his 1917 paper "Mourning and Melancholia". 
He theorized that objective loss, such as the loss of a valued relationship through death or a romantic break-up, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the "libidinal cathexis" of the ego. Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively but the ego itself is compromised. The patient's decline in self-perception is revealed in his belief in his own blame, inferiority, and unworthiness. He also emphasized early life experiences as a predisposing factor. Adolf Meyer put forward a mixed social and biological framework emphasizing "reactions" in the context of an individual's life, and argued that the term "depression" should be used instead of "melancholia". The first version of the DSM (DSM-I, 1952) contained "depressive reaction" and the DSM-II (1968) "depressive neurosis", defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders. In the mid-20th century, researchers theorized that depression was caused by a chemical imbalance in neurotransmitters in the brain, a theory based on observations made in the 1950s of the effects of reserpine and isoniazid in altering monoamine neurotransmitter levels and affecting depressive symptoms. The chemical imbalance theory has never been proven. The term "unipolar" (along with the related term "bipolar") was coined by the neurologist and psychiatrist Karl Kleist, and subsequently used by his disciples Edda Neele and Karl Leonhard.
The term "Major depressive disorder" was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the "Research Diagnostic Criteria", building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a "mild depressive episode", adding higher threshold categories for moderate and severe episodes. The ancient idea of "melancholia" still survives in the notion of a melancholic subtype. The new definitions of depression were widely accepted, albeit with some conflicting findings and views. There have been some continued empirically based arguments for a return to the diagnosis of melancholia. There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s. The term "depression" is used in a number of different ways. It is often used to mean this syndrome but may refer to other mood disorders or simply to a low mood. People's conceptualizations of depression vary widely, both within and among cultures. "Because of the lack of scientific certainty," one commentator has observed, "the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it." There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or is an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle. The diagnosis is less common in some countries, such as China. 
It has been argued that the Chinese traditionally deny or somatize emotional depression (although since the early 1980s, the Chinese denial of depression may have moderated). Alternatively, it may be that Western cultures reframe and elevate some expressions of human distress to disorder status. Australian professor Gordon Parker and others have argued that the Western concept of depression "medicalizes" sadness or misery. Similarly, Hungarian-American psychiatrist Thomas Szasz and others argue that depression is a metaphorical illness that is inappropriately regarded as an actual disease. There has also been concern that the DSM, as well as the field of descriptive psychiatry that employs it, tends to reify abstract phenomena such as depression, which may in fact be social constructs. American archetypal psychologist James Hillman writes that depression can be healthy for the soul, insofar as "it brings refuge, limitation, focus, gravity, weight, and humble powerlessness." Hillman argues that therapeutic attempts to eliminate depression echo the Christian theme of resurrection, but have the unfortunate effect of demonizing a soulful state of being. Historical figures were often reluctant to discuss or seek treatment for depression due to social stigma about the condition, or due to ignorance of diagnosis or treatments. Nevertheless, analysis or interpretation of letters, journals, artwork, writings, or statements of family and friends of some historical personalities has led to the presumption that they may have had some form of depression. People who may have had depression include English author Mary Shelley, American-British writer Henry James, and American president Abraham Lincoln. Some well-known contemporary people with possible depression include Canadian songwriter Leonard Cohen and American playwright and novelist Tennessee Williams. Some pioneering psychologists, such as Americans William James and John B. Watson, dealt with their own depression.
There has been a continuing discussion of whether neurological disorders and mood disorders may be linked to creativity, a discussion that goes back to Aristotelian times. British literature gives many examples of reflections on depression. English philosopher John Stuart Mill experienced a several-months-long period of what he called "a dull state of nerves", when one is "unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent". He quoted English poet Samuel Taylor Coleridge's "Dejection" as a perfect description of his case: "A grief without a pang, void, dark and drear, / A drowsy, stifled, unimpassioned grief, / Which finds no natural outlet or relief / In word, or sigh, or tear." English writer Samuel Johnson used the term "the black dog" in the 1780s to describe his own depression, and it was subsequently popularized by former British Prime Minister Sir Winston Churchill, who had depression himself. Social stigma of major depression is widespread, and contact with mental health services reduces this only slightly. Public opinions on treatment differ markedly from those of health professionals; alternative treatments are held to be more helpful than pharmacological ones, which are viewed poorly. In the UK, the Royal College of Psychiatrists and the Royal College of General Practitioners conducted a joint Five-year Defeat Depression campaign to educate and reduce stigma from 1992 to 1996; a MORI study conducted afterwards showed a small positive change in public attitudes to depression and treatment. Depression is especially common among those over 65 years of age and increases in frequency beyond this age. In addition, the risk of depression increases in relation to the frailty of the individual. Depression is one of the most important factors which negatively impact quality of life in adults, as well as the elderly.
Both symptoms and treatment among the elderly differ from those of the rest of the population. As with many other diseases, it is common among the elderly not to present with classical depressive symptoms. Diagnosis and treatment are further complicated in that the elderly are often simultaneously treated with a number of other drugs, and often have other concurrent diseases. Treatment differs in that studies of SSRIs have shown lesser and often inadequate effects among the elderly, while other drugs with clearer effects, such as duloxetine (a serotonin-norepinephrine reuptake inhibitor), have adverse effects, such as dizziness, dryness of the mouth, diarrhea and constipation, which can be especially difficult to handle among the elderly. Problem solving therapy was, as of 2015, the only psychological therapy with proven effect, and can be likened to a simpler form of cognitive behavioral therapy. However, elderly people with depression are seldom offered any psychological treatment, and the evidence proving other treatments effective is incomplete. ECT has been used in the elderly, and register studies suggest it is effective, although less so as compared to the rest of the population. The risks involved with treatment of depression among the elderly as opposed to benefits are not entirely clear. MRI scans of patients with depression have revealed a number of differences in brain structure compared to those who are not depressed. Meta-analyses of neuroimaging studies in major depression reported that, compared to controls, depressed patients had increased volume of the lateral ventricles and adrenal gland and smaller volumes of the basal ganglia, thalamus, hippocampus, and frontal lobe (including the orbitofrontal cortex and gyrus rectus). Hyperintensities have been associated with patients with a late age of onset, and have led to the development of the theory of vascular depression. Trials are looking at the effects of botulinum toxins on depression.
The idea is that the drug makes the person frown less, which stops the negative facial feedback from the face. In 2015 results showed, however, that the partly positive effects that had been observed until then could have been due to placebo effects. In 2018–2019, the US Food and Drug Administration (FDA) granted Breakthrough therapy designation to Compass Pathways and, separately, Usona Institute. Compass is a for-profit company studying psilocybin for treatment-resistant depression; Usona is a non-profit organization studying psilocybin for major depressive disorder more broadly. Models of depression in animals for the purpose of study include iatrogenic depression models (such as drug-induced), forced swim tests, tail suspension tests, and learned helplessness models. Criteria frequently used to assess depression in animals include expression of despair, neurovegetative changes, and anhedonia, as many other criteria for depression are untestable in animals, such as guilt and suicidality.
Diana (mythology) Diana is a goddess in Roman and Hellenistic religion, primarily considered a patroness of the countryside, hunters, crossroads, and the Moon. She is equated with the Greek goddess Artemis, and absorbed much of Artemis' mythology early in Roman history, including a birth on the island of Delos to parents Jupiter and Latona, and a twin brother, Apollo, though she had an independent origin in Italy. Diana is considered a virgin goddess and protector of childbirth. Historically, Diana made up a triad with two other Roman deities: Egeria the water nymph, her servant and assistant midwife; and Virbius, the woodland god. Diana is revered in modern neopagan religions including Roman neopaganism, Stregheria, and Wicca. From the medieval to the modern period, as folklore attached to her developed and was eventually adapted into neopagan religions, the mythology surrounding Diana grew to include a consort (Lucifer) and daughter (Aradia), figures sometimes recognized by modern traditions. In the ancient, medieval, and modern periods, Diana has been considered a triple deity, merged with a goddess of the moon (Luna/Selene) and the underworld (usually Hecate). The name "Dīāna" probably derives from Latin "dīus" ('godly'), ultimately from Proto-Italic *"divios" ("diwios"), meaning 'divine, heavenly'. It stems from Proto-Indo-European "*diwyós" ('divine, heavenly'), formed from the root "*dyew-" ('daylight sky') with the thematic suffix -"yós" attached. Cognates appear in Mycenaean Greek "di-wi-ja", in Ancient Greek "dîos" (δῖος; 'belonging to heaven, godlike'), and in Sanskrit "divyá" ('heavenly'). The ancient Latin writers Varro and Cicero considered the etymology of Dīāna as allied to that of "dies" and connected to the shine of the Moon, noting that one of her titles is Diana Lucifera ("light-bearer"). ... people regard Diana and the moon as one and the same. ... the moon "(luna)" is so called from the verb to shine "(lucere)".
Lucina is identified with it, which is why in our country they invoke Juno Lucina in childbirth, just as the Greeks call on Diana the Light-bearer. Diana also has the name "Omnivaga" ("wandering everywhere"), not because of her hunting but because she is numbered as one of the seven planets; her name Diana derives from the fact that she turns darkness into daylight "(dies)". She is invoked at childbirth because children are born occasionally after seven, or usually after nine, lunar revolutions ... The persona of Diana is complex, and contains a number of archaic features. Diana was originally considered to be a goddess of the wilderness and of the hunt, a central sport in both Roman and Greek culture. Early Roman inscriptions to Diana celebrated her primarily as a huntress and patron of hunters. Later, in the Hellenistic period, Diana came to be equally or more revered as a goddess not of the wild woodland but of the "tame" countryside, or "villa rustica", the idealization of which was common in Greek thought and poetry. This dual role as goddess of both civilization and the wild, and therefore the civilized countryside, first applied to the Greek goddess Artemis (for example, in the 3rd century BCE poetry of Anacreon). By the 3rd century CE, after Greek influence had a profound impact on Roman religion, Diana had been almost fully combined with Artemis and took on many of her attributes, both in her spiritual domains and in the description of her appearance. The Roman poet Nemesianus wrote a typical description of Diana: She carried a bow and a quiver full of golden arrows, wore a golden cloak, purple half-boots, and a belt with a jeweled buckle to hold her tunic together, and wore her hair gathered in a ribbon. By the 5th century CE, almost a millennium after her cult's entry into Rome, the philosopher Proclus could still characterize Diana as "the inspective guardian of every thing rural, [who] represses every thing rustic and uncultivated." 
Diana was often considered an aspect of a triple goddess, known as "Diana triformis": Diana, Luna, and Hecate. According to historian C.M. Green, "these were neither different goddesses nor an amalgamation of different goddesses. They were Diana...Diana as huntress, Diana as the moon, Diana of the underworld." At her sacred grove on the shores of Lake Nemi, Diana was venerated as a triple goddess beginning in the late 6th century BCE. Andreas Alföldi interpreted an image on a late Republican coin as the Latin Diana "conceived as a threefold unity of the divine huntress, the Moon goddess and the goddess of the nether world, Hekate". This coin, minted by P. Accoleius Lariscolus in 43 BCE, has been acknowledged as representing an archaic statue of Diana Nemorensis. It represents Artemis with the bow at one extremity, Luna-Selene with flowers at the other and a central deity not immediately identifiable, all united by a horizontal bar. Iconographical analysis allows this image to be dated to the 6th century BCE, a period for which Etruscan models are attested. The coin shows that the triple goddess cult image still stood in the "lucus" of Nemi in 43 BCE. Lake Nemi was called "Triviae lacus" by Virgil ("Aeneid" 7.516), while Horace called Diana "montium custos nemorumque virgo" ("keeper of the mountains and virgin of Nemi") and "diva triformis" ("three-form goddess"). Two heads found in the sanctuary and the Roman theatre at Nemi, which have a hollow on their back, lend support to this interpretation of an archaic triple Diana. The earliest epithet of Diana was "Trivia", and she was addressed with that title by Virgil, Catullus, and many others. "Trivia" comes from the Latin "trivium", "triple way", and refers to Diana's guardianship over roadways, particularly Y-junctions or three-way crossroads. This role carried a somewhat dark and dangerous connotation, as it metaphorically pointed the way to the underworld. 
In the 1st-century CE play "Medea", Seneca's titular sorceress calls on Trivia to cast a magic spell. She evokes the triple goddess of Diana, Selene, and Hecate, and specifies that she requires the powers of the latter. The 1st-century poet Horace similarly wrote of a magic incantation invoking the power of both Diana and Proserpina. The symbol of the crossroads is relevant to several aspects of Diana's domain. It can symbolize the paths hunters may encounter in the forest, lit only by the full moon; this symbolizes making choices "in the dark" without the light of guidance. Diana's role as a goddess of the underworld, or at least of ushering people between life and death, caused her early on to be conflated with Hecate (and occasionally also with Proserpina). However, her role as an underworld goddess appears to pre-date strong Greek influence (though the early Greek colony of Cumae had a cult of Hekate and certainly had contacts with the Latins). A theater in her sanctuary at Lake Nemi included a pit and tunnel that would have allowed actors to easily descend on one side of the stage and ascend on the other, indicating a connection between the phases of the moon and a descent by the moon goddess into the underworld. It is likely that her underworld aspect in her original Latin worship did not have a distinct name, as Luna was for her moon aspect. This is due to a seeming reluctance or taboo by the early Latins to name underworld deities, and the fact that they believed the underworld to be silent, precluding naming. Hekate, a Greek goddess also associated with the boundary between the earth and the underworld, became attached to Diana as a name for her underworld aspect following Greek influence. Diana was often considered to be a goddess associated with fertility and childbirth, and the protection of women during labor. 
This probably arose as an extension of her association with the moon, whose cycles were believed to parallel the menstrual cycle, and which was used to track the months during pregnancy. At her shrine in Aricia, worshipers left votive terracotta offerings for the goddess in the shapes of babies and wombs, and the temple there also offered care of pups and pregnant dogs. This care of infants also extended to the training of both young people and dogs, especially for hunting. In her role as a protector of childbirth, Diana was called "Diana Lucina" or even "Juno Lucina", because her domain overlapped with that of the goddess Juno. The title of Juno may also have had an independent origin as it applied to Diana, with the literal meaning of "helper" - Diana as "Juno Lucina" would be the "helper of childbirth". According to a theory proposed by Georges Dumézil, Diana falls into a particular subset of celestial gods, referred to in histories of religion as "frame gods". Such gods, while keeping the original features of celestial divinities (i.e. transcendent heavenly power and abstention from direct rule in worldly matters), did not share the fate of other celestial gods in Indo-European religions - that of becoming "dei otiosi", or gods without practical purpose, since they did retain a particular sort of influence over the world and mankind. The celestial character of Diana is reflected in her connection with inaccessibility, virginity, light, and her preference for dwelling on high mountains and in sacred woods. Diana, therefore, reflects the heavenly world in its sovereignty, supremacy, impassibility, and indifference towards such secular matters as the fates of mortals and states. At the same time, however, she is seen as active in ensuring the succession of kings and in the preservation of humankind through the protection of childbirth. 
These functions are apparent in the traditional institutions and cults related to the goddess: According to Dumézil, the forerunner of all "frame gods" is an Indian epic hero who was the image (avatar) of the Vedic god Dyaus. Having renounced the world, in his roles of father and king, he attained the status of an immortal being while retaining the duty of ensuring that his dynasty is preserved and that there is always a new king for each generation. The Scandinavian god Heimdallr performs an analogous function: he is born first and will die last. He too gives origin to kingship and the first king, bestowing on him regal prerogatives. Diana, although a female deity, has exactly the same functions, preserving mankind through childbirth and royal succession. F. H. Pairault, in her essay on Diana, qualified Dumézil's theory as "impossible to verify". Unlike the Greek gods, Roman gods were originally considered to be numina: divine powers of presence and will that did not necessarily have physical form. At the time Rome was founded, Diana and the other major Roman gods probably did not have much mythology per se, or any depictions in human form. The idea of gods as having anthropomorphic qualities and human-like personalities and actions developed later, under the influence of Greek and Etruscan religion. By the 3rd century BCE, Diana is found listed among the twelve major gods of the Roman pantheon by the poet Ennius. Though the Capitoline Triad were the primary state gods of Rome, early Roman myth did not assign a strict hierarchy to the gods the way Greek mythology did, though the Greek hierarchy would eventually be adopted by Roman religion as well. Once Greek influence had caused Diana to be considered identical to the Greek goddess Artemis, Diana acquired Artemis's physical description, attributes, and variants of her myths as well. 
Like Artemis, Diana is usually depicted in art wearing a short skirt, with a hunting bow and quiver, and often accompanied by hunting dogs. A 1st-century BCE Roman coin (see above) depicted her with a unique, short hairstyle, and in triple form, with one form holding a bow and another holding a poppy. When worship of Apollo was first introduced to Rome, Diana became conflated with Apollo's sister Artemis as in the earlier Greek myths, and as such she became identified as the daughter of Apollo's parents Latona and Jupiter. Though Diana was usually considered to be a virgin goddess like Artemis, later authors sometimes attributed consorts and children to her. According to Cicero and Ennius, Trivia (an epithet of Diana) and Caelus were the parents of Janus, as well as of Saturn and Ops. According to Macrobius (who cited Nigidius Figulus and Cicero), Janus and Jana (Diana) are a pair of divinities, worshiped as the sun and moon. Janus was said to receive sacrifices before all the others because, through him, the way of access to the desired deity is made apparent. Diana's mythology incorporated stories which were variants of earlier stories about Artemis. Possibly the most well-known of these is the myth of Actaeon. In Ovid's version of this myth, part of his poem "Metamorphoses", he tells of a pool or grotto hidden in the wooded valley of Gargaphie. There, Diana, the goddess of the woods, would bathe and rest after a hunt. Actaeon, a young hunter, stumbled across the grotto and accidentally witnessed the goddess bathing without invitation. In retaliation, Diana splashed him with water from the pool, cursing him, and he transformed into a deer. His own hunting dogs caught his scent and tore him apart. Ovid's version of the myth of Actaeon differs from most earlier sources. Unlike in earlier myths about Artemis, Actaeon is killed for an innocent mistake: glimpsing Diana bathing. 
An earlier variant of this myth, known as the Bath of Pallas, had the hunter intentionally spy on the bathing goddess Pallas (Athena), and earlier versions of the myth involving Artemis did not involve the bath at all. Diana was an ancient goddess common to all Latin tribes. Therefore, many sanctuaries were dedicated to her in the lands inhabited by Latins. Her primary sanctuary was a woodland grove overlooking Lake Nemi, a body of water also known as "Diana's Mirror", where she was worshiped as Diana Nemorensis, or "Diana of the Wood". In Rome, the cult of Diana may have been almost as old as the city itself. Varro mentions her in the list of deities to whom king Titus Tatius promised to build a shrine. His list included Luna and Diana Lucina as separate entities. Another testimony to the antiquity of her cult is to be found in the "lex regia" of King Tullus Hostilius that condemns those guilty of incest to the "sacratio" to Diana. She had a temple in Rome on the Aventine Hill, according to tradition dedicated by king Servius Tullius. Its location is remarkable as the Aventine is situated outside the pomerium, i.e. the original territory of the city, in order to comply with the tradition that Diana was a goddess common to all Latins and not exclusively of the Romans. Being placed on the Aventine, and thus outside the "pomerium", meant that Diana's cult essentially remained a "foreign" one, like that of Bacchus; she was never officially "transferred" to Rome as Juno was after the sack of Veii. Other known sanctuaries and temples to Diana include Colle di Corne near Tusculum, where she is referred to with the archaic Latin name of "deva Cornisca" and where a collegium of worshippers existed; at Évora, Portugal; Mount Algidus, also near Tusculum; at Lavinium; and at Tibur (Tivoli), where she is referred to as "Diana Opifera Nemorensis". Diana was also worshiped at a sacred wood mentioned by Livy - "ad compitum Anagninum" (near Anagni), and on Mount Tifata in Campania. 
According to Plutarch, men and women alike were worshipers of Diana and were welcomed into all of her temples. The one exception seems to have been a temple on the Vicus Patricius, which men either did not enter due to tradition, or were not allowed to enter. Plutarch related a legend that a man had attempted to assault a woman worshiping in this temple and was killed by a pack of dogs (echoing the myth of Diana and Actaeon), which resulted in a superstition against men entering the temple. A feature common to nearly all of Diana's temples and shrines by the 2nd century CE was the hanging up of stag antlers. Plutarch noted that the only exception to this was the temple on the Aventine Hill, in which bull horns had been hung up instead. Plutarch explains this by way of reference to a legend surrounding the sacrifice of an impressive Sabine bull by King Servius at the founding of the Aventine temple. Diana's worship may have originated at an open-air sanctuary overlooking Lake Nemi in the Alban Hills near Aricia, where she was worshiped as Diana Nemorensis ("Diana of the Sylvan Glade"). According to legendary accounts, the sanctuary was founded by Orestes and Iphigenia after they fled from the Tauri. In this tradition, the Nemi sanctuary was supposedly built on the pattern of an earlier Temple of Artemis Tauropolos, and the first cult statue at Nemi was said to have been stolen from the Tauri and brought to Nemi by Orestes. Historical evidence suggests that worship of Diana at Nemi flourished from at least the 6th century BCE until the 2nd century CE. Her cult there was first attested in Latin literature by Cato the Elder, in a surviving quote by the late grammarian Priscian. By the 4th century BCE, the simple shrine at Nemi had been joined by a temple complex. The sanctuary served an important political role as it was held in common by the Latin League. A festival to Diana, the Nemoralia, was held yearly at Nemi on the Ides of August (August 13–15). 
Worshipers traveled to Nemi carrying torches and garlands, and once at the lake, they left pieces of thread tied to fences and tablets inscribed with prayers. Diana's festival eventually became widely celebrated throughout Italy, which was unusual given the provincial nature of Diana's cult. The poet Statius wrote of the festival, describing the triple nature of the goddess by invoking heavenly (the stars), earthly (the grove itself) and underworld (Hecate) imagery. He also suggests by the garlanding of the dogs and polishing of the spears that no hunting was allowed during the festival. Legend has it that Diana's high priest at Nemi, known as the Rex Nemorensis, was always an escaped slave who could only obtain the position by defeating his predecessor in a fight to the death. Sir James George Frazer wrote of this sacred grove in "The Golden Bough", basing his interpretation on brief remarks in Strabo (5.3.12), Pausanias (2.27.4) and Servius' commentary on the "Aeneid" (6.136). The legend tells of a tree that stood in the center of the grove and was heavily guarded. No one was allowed to break off its limbs, with the exception of a runaway slave, who was allowed, if he could, to break off one of the boughs. He was then in turn granted the privilege to engage the Rex Nemorensis, the current king and priest of Diana, in a fight to the death. If the slave prevailed, he became the next king for as long as he could defeat his challengers. However, Joseph Fontenrose criticised Frazer's assumption that a rite of this sort actually occurred at the sanctuary, and no contemporary records exist that support the historical existence of the "Rex Nemorensis". Rome hoped to unify and control the Latin tribes around Nemi, so Diana's worship was imported to Rome as a show of political solidarity. 
Diana soon afterwards became Hellenized, and combined with the Greek goddess Artemis, "a process which culminated with the appearance of Diana beside Apollo [the brother of Artemis] in the first "lectisternium" at Rome" in 399 BCE. The process of identification between the two goddesses probably began when artists who were commissioned to create new cult statues for Diana's temples outside Nemi were struck by the similarities between Diana and the more familiar Artemis, and sculpted Diana in a manner inspired by previous depictions of Artemis. Sibylline influence and trade with Massilia, where similar cult statues of Artemis existed, would have completed the process. According to Françoise Hélène Pairault's study, historical and archaeological evidence point to the fact that the characteristics given to both Diana of the Aventine Hill and Diana Nemorensis were the product of the direct or indirect influence of the cult of Artemis, which was spread by the Phocaeans among the Greek towns of Campania, Cumae and Capua, which in turn passed it on to the Etruscans and the Latins by the 6th and 5th centuries BCE. Evidence suggests that a confrontation occurred between two groups of Etruscans who fought for supremacy, those from Tarquinia, Vulci and Caere (allied with the Greeks of Capua) and those of Clusium. This is reflected in the legend of the coming of Orestes to Nemi and of the inhumation of his bones in the Roman Forum near the temple of Saturn. The cult introduced by Orestes at Nemi is apparently that of the Artemis Tauropolos. The literary amplification reveals a confused religious background: different versions of Artemis were conflated under the epithet. As far as Nemi's Diana is concerned there are two different versions, by Strabo and Servius Honoratus. Strabo's version looks to be the most authoritative as he had access to first-hand primary sources on the sanctuaries of Artemis, i.e. the priest of Artemis Artemidoros of Ephesus. 
The meaning of "Tauropolos" denotes an Asiatic goddess with lunar attributes, lady of the herds. The only possible "interpretatio graeca" of high antiquity concerning "Diana Nemorensis" could have been the one based on this ancient aspect of a deity of light, master of wildlife. "Tauropolos" is an ancient epithet attached to Artemis, Hecate, and even Athena. According to the legend Orestes founded Nemi together with Iphigenia. At Cuma the Sybil is the priestess of both Phoibos and Trivia. Hesiod and Stesichorus tell the story according to which after her death Iphigenia was divinised under the name of Hecate, a fact which would support the assumption that Artemis Tauropolos had a real ancient alliance with the heroine, who was her priestess in Taurid and her human paragon. This religious complex is in turn supported by the triple statue of Artemis-Hecate. In Rome, Diana was regarded with great reverence and was a patroness of lower-class citizens, called plebeians, as well as slaves, who could receive asylum in her temples. Georg Wissowa proposed that this might be because the first slaves of the Romans were Latins of the neighboring tribes. However, the Temple of Artemis at Ephesus had the same custom of the asylum. Worship of Diana probably spread into the city of Rome beginning around 550 BCE, during her Hellenization and combination with the Greek goddess Artemis. Diana was first worshiped along with her brother and mother, Apollo and Latona, in their temple in the Campus Martius, and later in the Temple of Apollo Palatinus. The first major temple dedicated primarily to Diana in the vicinity of Rome was the Temple of Diana Aventina (Diana of the Aventine Hill). According to the Roman historian Livy, the construction of this temple began in the 6th century BCE and was inspired by stories of the massive Temple of Artemis at Ephesus, which was said to have been built through the combined efforts of all the cities of Asia Minor. 
Legend has it that Servius Tullius was impressed with this act of massive political and economic cooperation, and convinced the cities of the Latin League to work with the Romans to build their own temple to the goddess. However, there is no compelling evidence for such an early construction of the temple, and it is more likely that it was built in the 3rd century BCE, following the influence of the temple at Nemi, and probably about the same time the first temples to Vertumnus (who was associated with Diana) were built in Rome (264 BCE). The misconception that the Aventine Temple was inspired by the Ephesian Temple might originate in the fact that the cult images and statues used at the former were based heavily on those found in the latter. Whatever its initial construction date, records show that the Aventine Temple was rebuilt by Lucius Cornificius in 32 BCE. If it was still in use by the 4th century CE, the Aventine temple would have been permanently closed during the persecution of pagans in the late Roman Empire. Today, a short street named the "Via del Tempio di Diana" and an associated plaza, "Piazza del Tempio di Diana", commemorate the site of the temple. Part of its wall is located within one of the halls of the Apuleius restaurant. Later temple dedications were often based on the Temple of Diana as a model for their ritual formulas and regulations. Roman politicians built several minor temples to Diana elsewhere in Rome to secure public support. One of these was built in the Campus Martius in 187 BCE; no Imperial period records of this temple have been found, and it is possible it was one of the temples demolished around 55 BCE in order to build a theater. Diana also had a public temple on the Quirinal Hill, the sanctuary of Diana Planciana. It was dedicated by Plancius in 55 BCE, though it is unclear which Plancius. 
In their worship of Artemis, Greeks filled their temples with sculptures of the goddess created by well-known sculptors, and many were adapted for use in the worship of Diana by the Romans, beginning around the 2nd century BCE (the beginning of a period of strong Hellenistic influence on Roman religion). The earliest depictions of the Artemis of Ephesus are found on Ephesian coins from this period. By the Imperial period, small marble statues of the Ephesian Artemis were being produced in the Western region of the Mediterranean and were often bought by Roman patrons. The Romans obtained a large copy of an Ephesian Artemis statue for their temple on the Aventine Hill. For educated Romans, Diana was usually depicted in her Greek guise. If she was shown accompanied by a deer, as in the "Diana of Versailles", this is because Diana was the patroness of hunting. The deer may also offer a covert reference to the myth of Actaeon, who saw her bathing naked. Diana transformed Actaeon into a stag and set his own hunting dogs to kill him. In Campania, Diana had a major temple at Mount Tifata, near Capua. She was worshiped there as "Diana Tifatina". This was one of the oldest sanctuaries in Campania. As a rural sanctuary, it included lands and estates that would have been worked by slaves following the Roman conquest of Campania, and records show that expansion and renovation projects at her temple were funded in part by other conquests by Roman military campaigns. The modern Christian church of Sant'Angelo in Formis was built on the ruins of the Tifata temple. In the Roman provinces, Diana was widely worshiped alongside local deities. Over 100 inscriptions to Diana have been cataloged in the provinces, mainly from Gaul, Upper Germania, and Britannia. Diana was commonly invoked alongside another forest god, Silvanus, as well as other "mountain gods". 
In the provinces, she was occasionally conflated with local goddesses such as Abnoba, and was given high status, with "Augusta" and "regina" ("queen") being common epithets. Diana was not only regarded as a goddess of the wilderness and the hunt, but was often worshiped as a patroness of families. She served a similar function to the hearth goddess Vesta, and was sometimes considered to be a member of the Penates, the deities most often invoked in household rituals. In this role, she was often given a name reflecting the tribe or family who worshiped her and asked for her protection. For example, in what is now Wiesbaden, Diana was worshiped as "Diana Mattiaca" by the Mattiaci tribe. Other family-derived names attested in the ancient literature include "Diana Cariciana", "Diana Valeriana", and "Diana Plancia". As a house goddess, Diana often became reduced in stature compared to her official worship by the Roman state religion. In personal or family worship, Diana was brought to the level of other household spirits, and was believed to have a vested interest in the prosperity of the household and the continuation of the family. The Roman poet Horace regarded Diana as a household goddess in his "Odes", and had an altar dedicated to her in his villa where household worship could be conducted. In his poetry, Horace deliberately contrasted the kinds of grand, elevated hymns to Diana on behalf of the entire Roman state, the kind of worship that would have been typical at her Aventine temple, with a more personal form of devotion. Images of Diana and her associated myths have been found on sarcophagi of wealthy Romans. They often included scenes depicting sacrifices to the goddess, and on at least one example, the deceased man is shown joining Diana's hunt. Since ancient times, philosophers and theologians have examined the nature of Diana in light of her worship traditions, attributes, mythology, and identification with other gods. 
Diana was initially a hunting goddess and goddess of the local woodland at Nemi, but as her worship spread, she acquired attributes of other similar goddesses. As she became conflated with Artemis, she became a moon goddess, identified with the other lunar goddesses Luna and Hekate. She also became the goddess of childbirth and ruled over the countryside. Catullus wrote a poem to Diana in which she has more than one alias: Latonia, Lucina, Juno, Trivia, Luna. Along with Mars, Diana was often venerated at games held in Roman amphitheaters, and some inscriptions from the Danubian provinces show that she was conflated with Nemesis in this role, as "Diana Nemesis". Outside of Italy, Diana had important centers of worship where she was syncretised with similar local deities in Gaul, Upper Germania, and Britannia. Diana was particularly important in the region in and around the Black Forest, where she was conflated with the local goddess Abnoba and worshiped as "Diana Abnoba". Some late antique sources went even further, syncretizing many local "great goddesses" into a single "Queen of Heaven". The Platonist philosopher Apuleius, writing in the late 2nd century, depicted the goddess declaring: "I come, Lucius, moved by your entreaties: I, mother of the universe, mistress of all the elements, first-born of the ages, highest of the gods, queen of the shades, first of those who dwell in heaven, representing in one shape all gods and goddesses. My will controls the shining heights of heaven, the health-giving sea-winds, and the mournful silences of hell; the entire world worships my single godhead in a thousand shapes, with divers rites, and under many a different name. 
The Phrygians, first-born of mankind, call me the Pessinuntian Mother of the gods; the native Athenians the Cecropian Minerva; the island-dwelling Cypriots Paphian Venus; the archer Cretans Dictynnan Diana; the triple-tongued Sicilians Stygian Proserpine; the ancient Eleusinians Actaean Ceres; some call me Juno, some Bellona, others Hecate, others Rhamnusia; but both races of Ethiopians, those on whom the rising and those on whom the setting sun shines, and the Egyptians who excel in ancient learning, honour me with the worship which is truly mine and call me by my true name: Queen Isis." Later poets and historians looked to Diana's identity as a triple goddess to merge her with triads of heavenly, earthly, and underworld (chthonic) goddesses. Maurus Servius Honoratus said that the same goddess was called Luna in heaven, Diana on earth, and Proserpina in hell. Michael Drayton praises the Triple Diana in his poem "The Man in the Moone" (1606): "So these great three most powerful of the rest, Phoebe, Diana, Hecate, do tell, Her sovereignty in Heaven, in Earth and Hell". Based on the earlier writings of Plato, the Neoplatonist philosophers of late antiquity united the various major gods of Hellenic tradition into a series of monads containing within them triads, with some creating the world, some animating it or bringing it to life, and others harmonizing it. Within this system, Proclus considered Diana to be one of the primary animating, or life-giving, deities. Proclus, citing Orphic tradition, concludes that Diana "presides over all the generation in nature, and is the midwife of physical productive principles" and that she "extends these genitals, distributing as far as to subterranean natures the prolific power of [Bacchus]." Specifically, Proclus considered the life-generating principle of the highest order, within the Intellectual realm, to be Rhea, whom he identified with Ceres. Within her divinity was produced the cause of the basic principle of life. 
Projecting this principle into the lower, Hypercosmic realm of reality generated a lower monad, Kore, who could therefore be understood as Ceres' "daughter". Kore embodied the "maidenly" principle of generation that, more importantly, included a principle of division - where Demeter generates life indiscriminately, Kore distributes it individually. This division results in another triad or trinity, known as the Maidenly trinity, within the monad of Kore: namely, Diana, Proserpine, and Minerva, through whom individual living beings are given life and perfected. Specifically, according to a commentary by scholar Spyridon Rangos, Diana (equated with Hecate) gives existence, Proserpine (equated with "Soul") gives form, and Minerva (equated with "Virtue") gives intellect. In his commentary on Proclus, the 19th century Platonist scholar Thomas Taylor expanded upon the theology of the classical philosophers, further interpreting the nature and roles of the gods in light of the whole body of Neoplatonist philosophy. He cites Plato in giving a three-form aspect to her central characteristic of virginity: the undefiled, the mundane, and the anagogic. Through the first form, Diana is regarded as a "lover of virginity". Through the second, she is the guardian of virtue. Through the third, she is considered to "hate the impulses arising from generation." Through the principle of the undefiled, Taylor suggests that she is given supremacy in Proclus' triad of life-giving or animating deities, and in this role the theurgists called her Hekate. In this role, Diana is granted undefiled power ("Amilieti") from the other gods. This generative power does not proceed forth from the goddess (according to a statement by the Oracle of Delphi) but rather resides with her, giving her unparalleled virtue, and in this way she can be said to embody virginity. 
Later commentators on Proclus have clarified that the virginity of Diana is not an absence of sexual drive, but a renunciation of sexuality. Diana embodies virginity because she is the cause of fertile things, so logically, she herself cannot be fertile (within Neoplatonism, an important maxim is that "every productive cause is superior to the nature of the produced effect"). Using the ancient Neoplatonists as a basis, Taylor also commented on the triadic nature of Diana and related goddesses, and the ways in which they subsist within one another, partaking unevenly in each other's powers and attributes. For example, Kore is said to embody both Diana/Hecate and Minerva, who create the virtuous or virgin power within her, but also Proserpine (her sole traditional identification), through whom the generative power of the Kore as a whole is able to proceed forth into the world, where it joins with the demiurge to produce further deities, including Bacchus and "nine azure-eyed, flower-producing daughters". Proclus also included Artemis/Diana in a second triad of deities, along with Ceres and Juno. Proclus pointed to the conflict between Hera and Artemis in the "Iliad" as a representation of the two kinds of human souls. Where Hera creates the higher, more cultured, or "worthy" souls, Artemis brings light to and perfects the "less worthy" or less rational. As explained by Rangos (2000), "The aspect of reality which Artemis and Hera share, and because of which they engage in a symbolic conflict, is the engendering of life." Hera elevates rational living beings up to intellectual rational existence, whereas Artemis's power pertains to human life as far as its physical existence as a living thing. "Artemis deals with the most elementary forms of life or the most elementary part of all life, whereas Hera operates in the most elevated forms of life or the most elevated part of all life."
Sermons and other religious documents have provided evidence for the worship of Diana during the Middle Ages. Though few details have been recorded, enough references to Diana worship during the early Christian period exist to give some indication that it may have been relatively widespread among remote and rural communities throughout Europe, and that such beliefs persisted into the Merovingian period. References to contemporary Diana worship exist from the 6th century on the Iberian peninsula and what is now southern France, though more detailed accounts of Dianic cults were given for the Low Countries, and southern Belgium in particular. Many of these were probably local goddesses or wood nymphs (dryads) that had been conflated with Diana by Christian writers Latinizing local names and traditions. The 6th-century bishop Gregory of Tours reported meeting with a deacon named Vulfilaic (also known as Saint Wulflaicus or Walfroy the Stylite), who founded a hermitage on a hill in what is now Margut, France. On the same hill, he found "an image of Diana which the unbelieving people worshiped as a god." According to Gregory's report, worshipers would also sing chants in Diana's honor as they drank and feasted. Vulfilaic destroyed a number of smaller pagan statues in the area, but the statue of Diana was too large. After converting some of the local population to Christianity, Vulfilaic and a group of local residents attempted to pull the large statue down the mountain in order to destroy it, but failed. In Vulfilaic's account, after praying for a miracle, he was then able to single-handedly pull down the statue, at which point he and his group smashed it to dust with their hammers. According to Vulfilaic, this incident was quickly followed by an outbreak of pimples or sores that covered his entire body, which he attributed to demonic activity and similarly cured via what he described as a miracle.
Vulfilaic would later found a church on the site, which is today known as Mont Saint-Walfroy. Additional evidence for surviving pagan practices in the Low Countries region comes from the "Vita Eligii", or "Life of Saint Eligius", written by Audoin in the 7th century. Audoin drew together the familiar admonitions of Eligius to the people of Flanders. In his sermons, he denounced "pagan customs" that the people continued to follow. In particular, he denounced several Roman gods and goddesses alongside Druidic mythological beliefs and objects: "I denounce and contest, that you shall observe no sacrilegious pagan customs. For no cause or infirmity should you consult magicians, diviners, sorcerers or incantators. ... Do not observe auguries ... No influence attaches to the first work of the day or the [phase of the] moon. ... [Do not] make vetulas, little deer or iotticos or set tables at night or exchange New Year gifts or supply superfluous drinks ... No Christian ... performs solestitia or dancing or leaping or diabolical chants. No Christian should presume to invoke the name of a demon, not Neptune or Orcus or Diana or Minerva or Geniscus ... No one should observe Jove's day in idleness. ... No Christian should make or render any devotion to the gods of the trivium, where three roads meet, to the fanes or the rocks, or springs or groves or corners. None should presume to hang any phylacteries from the neck of man nor beast. ... None should presume to make lustrations or incantations with herbs, or to pass cattle through a hollow tree or ditch ... No woman should presume to hang amber from her neck or call upon Minerva or other ill-starred beings in their weaving or dyeing. ... None should call the sun or moon lord or swear by them. ... No one should tell fate or fortune or horoscopes by them as those do who believe that a person must be what he was born to be."
Legends from medieval Belgium concern a natural spring which came to be known as the "Fons Remacli", a location which may have been home to late-surviving worship of Diana. Remacle was a monk appointed by Eligius to head a monastery at Solignac, and he is reported to have encountered Diana worship in the area around the river Warche. The population in this region was said to have been involved in the worship of "Diana of the Ardennes" (a syncretism of Diana and the Celtic goddess Arduinna), with effigies and "stones of Diana" used as evidence of pagan practices. Remacle believed that demonic entities were present in the spring, and had caused it to run dry. He performed an exorcism of the water source, and installed a lead pipe, which allowed the water to flow again. Diana is the only pagan goddess mentioned by name in the New Testament (Acts 19). As a result, she became associated with many folk beliefs involving goddess-like supernatural figures that Catholic clergy wished to demonize. In the Middle Ages, legends of night-time processions of spirits led by a female figure are recorded in the church records of Northern Italy, western Germany, and southern France. The spirits were said to enter houses and consume food which then miraculously reappeared. They would sing and dance, and dispense advice regarding healing herbs and the whereabouts of lost objects. If the house was in good order, they would bring fertility and plenty. If not, they would bring curses to the family. Some women reported participating in these processions while their bodies still lay in bed. Historian Carlo Ginzburg has referred to these legendary spirit gatherings as "The Society of Diana". Local clergy complained that women believed they were following Diana or Herodias, riding out on appointed nights to join the processions or carry out instructions from the goddess.
The earliest reports of these legends appear in the writings of Regino of Prüm in the year 899, followed by many additional reports and variants of the legend in documents by Ratherius and others. By 1310, the names of the goddess figures attached to the legend were sometimes combined as Herodiana. It is likely that the clergy of this time used the identification of the procession's leader as Diana or Herodias in order to fit an older folk belief into a Biblical framework, as both are featured and demonized in the New Testament. Herodias was often conflated with her daughter Salome in legend, which also holds that, upon being presented with the severed head of John the Baptist, she was blown into the air by wind from the saint's mouth, through which she continued to wander for eternity. Diana was often conflated with Hecate, a goddess associated with the spirits of the dead and with witchcraft. These associations, and the fact that both figures are attested to in the Bible, made them a natural fit for the leader of the ghostly procession. Clergy used this identification to assert that the spirits were evil, and that the women who followed them were inspired by demons. As was typical of this time period, though pagan beliefs and practices had been almost totally eliminated from Europe, the clergy and other authorities still treated paganism as a real threat, in part thanks to biblical influence; much of the Bible had been written when various forms of paganism were still active if not dominant, so medieval clergy applied the same kinds of warnings and admonitions to any non-standard folk beliefs and practices they encountered.
Based on analysis of church documents and parishioner confessions, it is likely that the spirit identified by the Church as Diana or Herodias was called by the names of pre-Christian figures like Holda (a Germanic goddess of the winter solstice), or by names referencing her bringing of prosperity, like the Latin Abundia (meaning "plenty"), Satia (meaning "full" or "plentiful") and the Italian Richella (meaning "rich"). Some of the local titles for her, such as "bonae res" (meaning "good things"), are similar to late classical titles for Hecate, like "bona dea". This might indicate a cultural mixture of medieval folk ideas with holdovers from earlier pagan belief systems. Whatever her true origin, by the 13th century, the leader of the legendary spirit procession had come to be firmly identified with Diana and Herodias through the influence of the Church. In his wide-ranging, comparative study of mythology and religion, "The Golden Bough", anthropologist James George Frazer drew on various lines of evidence to re-interpret the legendary rituals associated with Diana at Nemi, particularly that of the "rex Nemorensis". Frazer developed his ideas in relation to J. M. W. Turner's painting, also titled "The Golden Bough", depicting a dream-like vision of the woodland lake of Nemi. According to Frazer, the "rex Nemorensis" or king at Nemi was the incarnation of a dying and reviving god, a solar deity who participated in a mystical marriage to a goddess. He died at the harvest and was reincarnated in the spring. Frazer claimed that this motif of death and rebirth is central to nearly all of the world's religions and mythologies. In Frazer's theory, Diana functioned as a goddess of fertility and childbirth, who, assisted by the sacred king, ritually returned life to the land in spring. The king in this scheme served not only as a high priest but as a god of the grove.
Frazer identifies this figure with Virbius, of whom little is known, but also with Jupiter via an association with sacred oak trees. Frazer further argued that Jupiter and Juno were simply duplicate names of Jana and Janus; that is, Diana and Dianus, all of whom had identical functions and origins. Frazer's speculatively reconstructed folklore of Diana's origins and the nature of her cult at Nemi were not well received even by his contemporaries. Godfrey Lienhardt noted that even during Frazer's lifetime, other anthropologists had "for the most part distanced themselves from his theories and opinions", and that the lasting influence of "The Golden Bough" and Frazer's wider body of work "has been in the literary rather than the academic world." Robert Ackerman wrote that, for anthropologists, Frazer is "an embarrassment" for being "the most famous of them all" and that most distance themselves from his work. While "The Golden Bough" achieved wide "popular appeal" and exerted a "disproportionate" influence "on so many [20th century] creative writers", Frazer's ideas played "a much smaller part" in the history of academic social anthropology. Folk legends like the Society of Diana, which linked the goddess to forbidden gatherings of women with spirits, may have influenced later works of folklore. One of these is Charles Godfrey Leland's "Aradia, or the Gospel of the Witches", which prominently featured Diana at the center of an Italian witch-cult. In Leland's interpretation of supposed Italian folk witchcraft, Diana is considered Queen of the Witches. In this belief system, Diana is said to have created the world out of her own being, having in herself the seeds of all creation yet to come. It was said that out of herself she divided the darkness and the light, keeping for herself the darkness of creation and creating her brother Lucifer.
Diana was believed to have loved and ruled with her brother, and with him bore a daughter, Aradia (a name likely derived from Herodias), who leads and teaches the witches on earth. Leland's claim that "Aradia" represented an authentic tradition from an underground witch-cult that had secretly worshiped Diana since ancient times has been dismissed by most scholars of folklore, religion, and medieval history. After the 1921 publication of Margaret Murray's "The Witch-cult in Western Europe", which hypothesized that the European witch trials were actually a persecution of a pagan religious survival, American sensationalist author Theda Kenyon's 1929 book "Witches Still Live" connected Murray's thesis with the witchcraft religion in "Aradia". Arguments against Murray's thesis would eventually include arguments against Leland. Witchcraft scholar Jeffrey Russell devoted some of his 1980 book "A History of Witchcraft: Sorcerers, Heretics and Pagans" to arguing against the claims Leland presented in "Aradia". Historian Elliot Rose's "A Razor for a Goat" dismissed "Aradia" as a collection of incantations unsuccessfully attempting to portray a religion. In his book "Triumph of the Moon", historian Ronald Hutton doubted not only the existence of the religion that "Aradia" claimed to represent, noting that the traditions Leland presented were unlike anything found in actual medieval literature, but also the existence of Leland's sources, arguing that it is more likely that Leland created the entire story than that Leland could be so easily "duped". Religious scholar Chas S. Clifton took exception to Hutton's position, writing that it amounted to an accusation of "serious literary fraud" made by an "argument from absence". Building on the work of Frazer, Murray, and others, some 20th and 21st century authors have attempted to identify links between Diana and more localized deities. R.
Lowe Thompson, for example, in his 2013 book "The History of the Devil", speculated that Diana may have been linked as an occasional "spouse" to the Gaulish horned god Cernunnos. Thompson suggested that Diana in her role as wild goddess of the hunt would have made a fitting consort for Cernunnos in Western Europe, and further noted the link between Diana in her aspect as Proserpina and Pluto, the Greek god associated with the riches of the earth who served a similar role to the Gaulish Cernunnos. Because Leland's claims about an Italian witch-cult are questionable, the first verifiable worship of Diana in the modern age was probably begun by Wicca. The earliest known practitioners of Neopagan witchcraft were members of a tradition begun by Gerald Gardner. Published versions of the devotional materials used by Gardner's group, dated to 1949, are heavily focused on the worship of Aradia, the daughter of Diana in Leland's folklore. Diana herself was recognized as an aspect of a single "great goddess" in the tradition of Apuleius, as described in the Wiccan Charge of the Goddess (itself adapted from Leland's text). Some later Wiccans, such as Scott Cunningham, would replace Aradia with Diana as the central focus of worship. In the early 1960s, Victor Henry Anderson founded the Feri Tradition, a form of Wicca that draws from both Charles Leland's folklore and the Gardnerian tradition. Anderson claimed that he had first been initiated into a witchcraft tradition as a child in 1926, and that he had been told the name of the goddess worshiped by witches was Tana. The name Tana originated in Leland's "Aradia", where he claimed it was an old Etruscan name for Diana. The Feri Tradition founded by Anderson continues to recognize Tana/Diana as an aspect of the Star Goddess related to the element of fire, and representing "the fiery womb that gives birth to and transforms all matter." (In "Aradia", Diana is also credited as the creatrix of the material world and Queen of Faeries.)
A few Wiccan traditions would elevate Diana to a more prominent position of worship, and there are two distinct modern branches of Wicca focused primarily on Diana. The first, founded during the early 1970s in the United States by Morgan McFarland and Mark Roberts, has a feminist theology and only occasionally accepts male participants, and leadership is limited to female priestesses. McFarland Dianic Wiccans base their tradition primarily on the work of Robert Graves and his book "The White Goddess", and were inspired by references to the existence of medieval European "Dianic cults" in Margaret Murray's book "The Witch-Cult in Western Europe". The second Dianic tradition, founded by Zsuzsanna Budapest in the mid 1970s, is characterized by an exclusive focus on the feminine aspect of the divine, and as a result is exclusively female. This tradition combines elements from British Traditional Wicca, Italian folk-magic based on the work of Charles Leland, feminist values, and healing practices drawn from a variety of different cultures. A third Neopagan tradition heavily inspired by the worship of Diana through the lens of Italian folklore is Stregheria, founded in the 1980s. It centers on a pair of deities regarded as divine lovers, who are known by several variant names including Diana and Dianus, alternately given as Tana and Tanus or Jana and Janus (the latter two deity names were mentioned by James Frazer in "The Golden Bough" as later corruptions of Diana and Dianus, which themselves were alternate and possibly older names for Juno and Jupiter). The tradition was founded by author Raven Grimassi, and influenced by Italian folktales he was told by his mother. One such folktale describes the moon being impregnated by her lover the morning star, a parallel to Leland's mythology of Diana and her lover Lucifer. Diana was also a subject of worship in certain Feraferian rites, particularly those surrounding the autumnal equinox, beginning in 1967.
The Romanian words for "fairy", "Zână" and "Sânziană", the Leonese and Portuguese word for "water nymph", "xana", and the Spanish word for "shooting target" and "morning call" ("diana") all seem to come from the name of Diana. Since the Renaissance, Diana's myths have often been represented in the visual and dramatic arts, including the opera "L'arbore di Diana". In the 16th century, Diana's image figured prominently at the châteaux of Fontainebleau, Chenonceau, and Anet, in deference to Diane de Poitiers, mistress of Henri II of France. At Versailles she was incorporated into the Olympian iconography with which Louis XIV, the Apollo-like "Sun King", liked to surround himself. Diana is also a character in the 1876 Léo Delibes ballet "Sylvia". The plot deals with Sylvia, one of Diana's nymphs who is sworn to chastity, and Diana's assault on Sylvia's affections for the shepherd Amyntas. Diana has been one of the most popular themes in art. Painters like Titian, Peter Paul Rubens, François Boucher, and Nicolas Poussin made use of her myth as a major theme. Most depictions of Diana in art featured the stories of Diana and Actaeon, or Callisto, or depicted her resting after hunting. Some famous works of art with a Diana theme are:
https://en.wikipedia.org/wiki?curid=8391
Danny Elfman Daniel Robert Elfman (born May 29, 1953) is an American composer, singer, songwriter, record producer, actor, and voice actor. He first became well known as the singer-songwriter for the new wave band Oingo Boingo in the early 1980s, and has since garnered international recognition for writing over 100 feature film scores, as well as compositions for television, stage productions, and the concert hall. Elfman has frequently worked with directors Tim Burton, Sam Raimi, and Gus Van Sant; notable achievements include the scores for 16 Burton-directed films, among them "Batman", "Edward Scissorhands", "Alice in Wonderland", and "Dumbo"; Raimi's "Spider-Man", "Spider-Man 2", and "Oz the Great and Powerful"; and Van Sant's Academy Award-nominated films "Good Will Hunting" and "Milk". He wrote music for all of the "Men in Black" and "Fifty Shades of Grey" franchise films, the songs and score for the Burton-produced animated musical "The Nightmare Before Christmas", and the themes for the popular television series "Desperate Housewives" and "The Simpsons". Among his honors are four Oscar nominations, two Emmy Awards, a Grammy, six Saturn Awards for Best Music, the 2002 Richard Kirk Award, the 2015 Disney Legend Award, and the Max Steiner Film Music Achievement Award in 2017. Elfman was born on May 29, 1953, in Los Angeles, California, to a Jewish family with Polish and Russian roots. He is the son of Blossom Elfman (née Bernstein), a writer and teacher, and Milton Elfman, a teacher who was in the Air Force. Elfman was raised in a racially mixed affluent community in Baldwin Hills, California, where he spent much of his time at the local movie theater discovering classic sci-fi, fantasy and horror films and first noticed the music of such film composers as Bernard Herrmann and Franz Waxman.
In his early school days, Elfman exhibited an aptitude for science with almost no interest in music, and was even rejected from elementary school orchestra "for having no propensity for music." This would change when he switched high schools in the late 1960s and fell in with a musical crowd, who introduced him to early jazz and the work of Stravinsky and his 20th century contemporaries. After finishing high school early with plans to travel the world, Elfman followed his brother Richard to France, where he performed violin with Jérôme Savary's Le Grand Magic Circus, an avant-garde musical theater group. He then embarked on a ten-month, self-guided tour through Africa, busking and collecting a range of West African percussion instruments until a series of illnesses forced him to return home. At this time, Richard was forming a new musical theater group in Los Angeles. While Elfman was never officially a student at CalArts, an instructor in the Indonesian music department encouraged him to attend classes and perform music there for two years. After returning to Los Angeles from Africa in the early 1970s, Elfman was asked by his brother Richard to serve as musical director of his street theatre performance art troupe The Mystic Knights of the Oingo Boingo. Elfman was tasked with adapting and arranging 1920s and 1930s jazz and big band music by artists such as Cab Calloway, Duke Ellington, Django Reinhardt and Josephine Baker for the ensemble, which consisted of up to 15 performers swapping 30 instruments. He also composed original pieces and helped build instruments unique for the group, including an aluminum gamelan, the 'Schlitz celeste' made from tuned beer cans, and a "junkyard orchestra" built from car parts and trash cans. The Mystic Knights performed on the street and in nightclubs throughout Los Angeles until Richard left in 1979 to pursue filmmaking. 
As a send-off to the group's original concept, Richard created the film "Forbidden Zone" based on The Mystic Knights' stage performances. Elfman composed the songs and his first score for the film, and appeared as the character Satan, who performs a reworked version of Calloway's "Minnie the Moocher" with ensemble members playing backup as henchmen. Before the release of "Forbidden Zone", Elfman had taken over The Mystic Knights as lead singer-songwriter in 1979, paring the group down to eight players, shortening the name to Oingo Boingo, and recording and touring as a ska-influenced new wave band. Their biggest success among eight studio albums penned by Elfman was 1985's "Dead Man's Party", featuring the hit song "Weird Science" from the movie of the same name. The band also appeared performing their single "Dead Man's Party" in the 1986 movie "Back to School", for which Elfman also composed the score. Elfman shifted the band to a more guitar-oriented rock sound in the late 1980s, which continued through their last album "Boingo" in 1994. Citing permanent hearing damage from performing live and conflicts with his film-scoring career, Elfman retired Oingo Boingo in 1995 with a series of five sold-out final concerts at the Universal Amphitheatre ending on Halloween night. On October 31, 2015, Elfman and Oingo Boingo guitarist Steve Bartek performed the song "Dead Man's Party" with an orchestra as an encore to a live-to-film concert of "The Nightmare Before Christmas" score at the Hollywood Bowl. Elfman told the audience the performance was "20 years to the day" of Oingo Boingo's retirement. As fans of Oingo Boingo and The Mystic Knights respectively, Tim Burton and Paul Reubens invited Elfman to write the score for their first feature film "Pee-wee's Big Adventure" in 1985. 
Elfman was initially apprehensive because of his lack of formal training and having never scored a studio feature, but after Burton accepted his initial demo of the title music and with orchestration assistance from Oingo Boingo guitarist and arranger Steve Bartek, he completed his score to great effect, while paying homage to his love of early film music and influential film composers Nino Rota and Bernard Herrmann. Elfman described the first time he heard his music played by a full orchestra as one of the most thrilling experiences of his life. Following "Pee-wee's Big Adventure", Elfman scored mainly quirky comedies in the late 1980s, including "Back to School" starring Rodney Dangerfield, Burton's "Beetlejuice" and the Bill Murray vehicle "Scrooged". Notable exceptions were the all-synth score to Emilio Estevez's crime drama "Wisdom" and the big band, blues-infused music for Martin Brest's buddy cop action film "Midnight Run". In 1989, Elfman's influential, Grammy-winning score for Burton's "Batman" marked a major stylistic shift to dark, densely orchestrated music in the romantic idiom, which would carry over to his scores for Warren Beatty's "Dick Tracy", Sam Raimi's "Darkman" and Clive Barker's "Nightbreed", all released in 1990. With "Batman", Elfman firmly established a career-spanning relationship with Burton, scoring all but three of the director's major studio releases. Highlights include "Edward Scissorhands" (1990), "Batman Returns" (1992), "Sleepy Hollow" (1999), "Big Fish" (2003) and "Alice in Wonderland" (2010). In 2005 he wrote the score and songs for Burton's "Corpse Bride" and provided the voice of the character of Bonejangles, as well as providing the score, songs and Oompa-Loompa vocals for Burton's "Charlie and the Chocolate Factory" that same year.
In addition to writing the score and ten songs for the Burton-produced stop motion animated film "The Nightmare Before Christmas", Elfman also provided the singing voice for main character Jack Skellington, as well as the voices for side characters Barrel and the Clown with the Tear-Away Face. In addition to frequent collaborations with Burton, Raimi and Gus Van Sant, Elfman has worked with esteemed directors such as Brian De Palma, Peter Jackson, Joss Whedon, Errol Morris, Ang Lee, Richard Donner, Guillermo del Toro, David O. Russell, Taylor Hackford, Jon Amiel, Joe Johnston, and Barry Sonnenfeld. His scores for Sonnenfeld's "Men in Black", Van Sant's "Good Will Hunting" and "Milk", and Burton's "Big Fish" all received Academy Award nominations. Since the mid 1990s, Elfman has expanded his craft to a range of genres, including thrillers ("Dolores Claiborne", "A Simple Plan", "The Kingdom"), dramas ("Sommersby", "A Civil Action", "Hitchcock"), indies ("Freeway", "Silver Linings Playbook", "Don't Worry, He Won't Get Far on Foot"), family ("Flubber", "Charlotte's Web", "Frankenweenie", "Goosebumps"), documentary ("Standard Operating Procedure", "The Unknown Known"), and straight horror ("Red Dragon", "The Wolfman"), as well as notable entries in his well-established areas of horror comedy ("The Frighteners", "Mars Attacks!", "Dark Shadows") and comic book-inspired action films ("Hulk", "Wanted", "", ""). Among his franchise work, Elfman composed the scores for all four "Men in Black" films (1997–2019) and all three "Fifty Shades of Grey" films (2015–2018). Elfman scored Raimi's "Spider-Man" in 2002 and "Spider-Man 2" in 2004, themes and selections from which were used for Raimi's "Spider-Man 3", though Elfman did not compose the score. In 1996, he also provided the score for the first film in the , adapting Lalo Schifrin's themes from the original television series as well as composing his own.
For several high-profile sequel and reboot projects in the 2010s, Elfman combined established musical themes with his own original thematic material, including the DC Extended Universe's "Justice League", "The Grinch", "Dumbo" and . Elfman was featured in the 2016 documentary "Score", in which he appeared among over 50 film composers to discuss the craft of movie music and influential figures in the business. Elfman's first piece of original concert music, "Serenada Schizophrana", was commissioned by the American Composers Orchestra, who premiered the piece on February 23, 2005, at Carnegie Hall. Subsequent concert works include his first violin concerto, "Eleven Eleven", co-commissioned by the Czech National Symphony Orchestra, Stanford Live at Stanford University, and the Royal Scottish National Orchestra, which premiered at Smetana Hall in Prague on June 21, 2017, with Sandy Cameron on violin and John Mauceri conducting the Czech National Symphony Orchestra; the "Piano Quartet", co-commissioned by the Lied Center for Performing Arts at the University of Nebraska and the Berlin Philharmonic Piano Quartet, which premiered February 6, 2018, in Lincoln, Nebraska; and the "Percussion Quartet", commissioned by Third Coast Percussion and premiered at the Philip Glass Days And Nights Festival in Big Sur on October 10, 2019. In 2008, Elfman accepted his first commission for the stage, composing the music for Twyla Tharp's "Rabbit and Rogue" ballet, co-commissioned by American Ballet Theatre and the Orange County Performing Arts Center and premiering on June 3, 2008, at the Metropolitan Opera House, Lincoln Center. Other works for stage include the music for Cirque du Soleil's "Iris" in 2011, and incidental music for the Broadway production of Taylor Mac's "" in 2019.
In October 2013, Elfman returned to the stage for the first time since his band Oingo Boingo disbanded to sing his vocal parts to a handful of "The Nightmare Before Christmas" songs as part of a concert titled "Danny Elfman's Music from the Films of Tim Burton", featuring suites of music from 15 Tim Burton films newly arranged by Elfman. The concert has since toured internationally and has played in Japan, Australia, Mexico and throughout Europe and the United States. Since 2015, Elfman has appeared nearly annually in a Hollywood Bowl Halloween concert featuring a full orchestra performing the "Nightmare Before Christmas" score live to the film projection. In 2019 it was announced that Elfman had been commissioned to write a piece for the National Youth Orchestra of Great Britain set to premiere in 2020, and a percussion concerto for Colin Currie and the London Philharmonic Orchestra for spring 2021. Other works in the planning phase are a cello concerto and a project that involves chamber orchestra and Elfman's own voice. It was announced that Elfman would be taking part in Coachella 2020 with a set titled "Past, Present and Future! From Boingo to Batman and Beyond!" Elfman clarified on his Instagram page that this would not be an Oingo Boingo reunion, writing "I’m creating a live mix of my last 40 years— both film music and songs... that includes my Boingo years, my composer years and a few things I’ve been working on for the last year or so, which will be world premieres." In addition to his music for film, Elfman has had a prolific career in television, penning themes for "The Simpsons", "Tales from the Crypt", "The Flash" and "Desperate Housewives", which won Elfman his first Emmy. He also adapted his original themes for the animated versions of "" and "Beetlejuice". Occasional forays into serial television include episodes of "Alfred Hitchcock Presents", "Amazing Stories" and "Pee-wee's Playhouse", as well as the miniseries "When We Rise", co-composed with Chris Bacon.
He has composed music for animated shorts, including Sally Cruikshank's "Face Like A Frog" and Tim Burton's "Stainboy" internet series. Elfman provided background music for Luigi Serafini's solo exhibition "il Teatro della Pittura" at the Fondazione Mudima di Milano in Milan, Italy in 1998 and for the "Tim Burton" exhibition at MoMA in 2009. In the 1990s, Elfman composed music for advertising campaigns for Nike, Nissan and Lincoln-Mercury, and in 2002 wrote the music for Honda's "Power of Dreams" advertising campaign, which was the first cinema commercial to be shot in the IMAX format. In 2013 he composed the music and provided the English-language vocals for the Hong Kong Disneyland attraction Mystic Manor. On October 31, 2019, the MasterClass online educational series released "Making Music out of Chaos," presenting 21 compositional and career lessons from Elfman's four decades of experience primarily in the film industry. Elfman has said his major influences are composers from Hollywood's Golden Age, such as Bernard Herrmann, Dimitri Tiomkin, Max Steiner, David Tamkin, Erich Korngold and Carl Stalling; 20th century classical composers Sergei Prokofiev, Igor Stravinsky, Béla Bartók, Dmitri Shostakovich, and Carl Orff; and jazz, experimental and minimalist composers Kurt Weill, Duke Ellington, Harry Partch, Philip Glass, Lou Harrison, Terry Riley and Steve Reich. Influences on specific scores include Erik Satie ("Forbidden Zone"), Nino Rota ("Pee-wee's Big Adventure"), George Gershwin ("Dick Tracy"), Pyotr Ilyich Tchaikovsky ("Edward Scissorhands"), and Jimi Hendrix ("Dead Presidents"). Though not considered direct influences "per se", Elfman has discussed his respect and admiration for film composers Jerry Goldsmith, Ennio Morricone, Thomas Newman, Alexandre Desplat and John Williams, as well as classical composer John Adams. 
Though many believe Richard Wagner informed his influential score to "Batman", Elfman has said any resemblance more likely came through Wagner's influence on classic film composers such as Herrmann, Steiner, Waxman and Korngold, as he was unfamiliar with Wagner's work at the time. Elfman counts Herrmann as his biggest influence, and has said hearing Herrmann's score to "The Day the Earth Stood Still" when he was a child was the first time he recognized film music as a cinematic artform and realized the powerful contribution a composer makes to the movies. Pastiche of Herrmann's music can be heard in Elfman's "Pee-wee's Big Adventure," especially the cues "Stolen Bike" and "Clown Attack", which directly reference Herrmann's music from "Psycho" and "7th Voyage of Sinbad" respectively. His score to "Batman" makes more subtle nods to Herrmann's "Journey to the Center of the Earth" and "Vertigo", and more integral homage can be heard in later scores for "Mars Attacks!" and "Hitchcock," as well as the "Blue Strings" movement of his first concert work "Serenada Schizophrana". While Elfman is primarily known for writing large-scale orchestral works in the romantic, 20th century and Hollywood Golden Age film score traditions, his compositions have used a wide range of idioms, including rock and blues ("Midnight Run", "Hot to Trot"), big band and jazz ("Dick Tracy", "Chicago"), operetta ("The Nightmare Before Christmas", "Corpse Bride"), funk and hip hop ("Dead Presidents", "Notorious"), folk and indie rock ("Taking Woodstock", "Silver Linings Playbook"), Americana ("Article 99", "Sommersby", "Big Fish"), minimalism ("Good Will Hunting", "Standard Operating Procedure", "The Unknown Known"), and atonal or experimental ("Freeway", "A Simple Plan", "The Girl on the Train"). 
Given his appreciation and study of world music and his vast collection of instruments from non-Western cultures, Elfman will often use traditional instruments in his scores when there is an international setting, such as African percussion for "Instinct," the oud for "The Kingdom" set in Saudi Arabia, and pan flute for "Proof of Life" set in South America. When working on films with established musical identifiers, Elfman will often incorporate original themes in addition to his own thematic material. Examples include Lalo Schifrin's main theme and "The Plot" from the original for ""; John Williams' theme for "Superman", the Hans Zimmer/Junkie XL theme for "Wonder Woman" and his own original "Batman" theme for "Justice League"; the "Welcome Christmas" song from the 1966 "How the Grinch Stole Christmas!" for "The Grinch"; and "Casey Junior," "Pink Elephants on Parade," and "When I See an Elephant Fly" from Disney's original 1941 animated film for "Dumbo". Even when not directly quoting themes from related films, Elfman will often pay homage through established musical gesture or tonality, for example Howard Shore's "The Silence of the Lambs" for "Red Dragon", Brad Fiedel's music for the "Terminator" franchise for "Terminator Salvation", Robert Cobert's original television series music for "Dark Shadows," and Alan Silvestri's work on "The Avengers" for the sequel "". Notable exceptions are Tim Burton's "Batman", "Planet of the Apes" and "Charlie and the Chocolate Factory," which do not make musical reference to pre-existing material. Elfman's work in pop music and specifically as songwriter for Oingo Boingo was influenced by The Specials, Madness, The Selecter, and XTC. 
The songs for "The Nightmare Before Christmas" and "Corpse Bride" were overarchingly influenced by Kurt Weill, Gilbert and Sullivan and early Rodgers and Hammerstein, whereas the songs in "Charlie and the Chocolate Factory" were individually influenced by Bollywood, The Mamas and the Papas, Earth, Wind & Fire, ABBA, and Queen. For his film scores, Elfman draws musical inspiration almost exclusively from viewing a cut of the film, and occasionally from visits to the set while the film is in production (he famously wrote and orchestrated his theme for "Batman" on an airplane to Los Angeles after visiting the set in London). While he prefers not to work from a script, story or concept, notable exceptions are "The Nightmare Before Christmas," for which ten songs needed to be written in advance of filmmaking, and "Dumbo," for which he composed the main theme before filming began. Once a rough cut of the film is ready, Elfman and the director have a spotting session to decide where to place music in the film, the emotional undercurrents of each scene, and overall tone. Elfman then spends a few weeks in free composition and experimentation, beginning to work out thematic material and to develop sounds and the harmonic palette. When he has received approval on initial material from the filmmakers, Elfman begins to compose anywhere from 60 to 120 minutes of music cue-by-cue. He says two of the most important things to capture at this point are the tone of each scene and its editorial rhythm. Next to thematic development, action set pieces tend to take Elfman the most time given the complexity of timing music to action. One element where Elfman's compositional process deviates from most film composers is that he will often compose three or more, often radically different, versions of a single cue to give the director more options for musicalizing a scene. Early in his career, he wrote out his scores using pencil, but has composed largely digitally since the mid-1990s. 
Before recording the score, he demos each cue by mocking up orchestral and choral parts on synthesizer to get approval from the director. Once approved, he provides a detailed, multi-line sketch of his composition to his lead orchestrator Steve Bartek, who ensures the sketches are appropriately broken down for sections of the orchestra (i.e. strings, brass, woodwinds, some percussion), choir (SATB) and individual players. Elfman also typically samples or records his own percussion and guitar playing to overlay with live orchestra. More than half of some scores feature Elfman's performance, including "Dead Presidents", "", "Planet of the Apes", "The Kingdom", "The Girl on the Train" and "The Circle". To produce the score, Elfman rents a recording studio and hires a conductor and orchestra/choir. He oversees the recording from the control booth so that he can troubleshoot with the film's director and recording engineers. The final recording is given to the film's sound department to mix with dialogue and sound effects for the film's complete soundtrack. Elfman will usually do a separate mix of select cues for an album presentation of the score, and has produced nearly 100 to date. When deadlines are compressed, or when he is not available to rescore or adapt his music after major edits are made to the film following the score's completion, Elfman will hire additional composers to work on small cues or sections of cues, adapting his existing material or themes. Examples include Jonathan Sheffer on "Darkman", David Buckley on the "Fifty Shades of Grey" films, and Pinar Toprak on "Justice League." Since the 1990s, Elfman has occasionally co-composed music or shared music writing credit (e.g. "When We Rise", "Spy Kids", ", "), or written themes that are then used or adapted by other composers, including Jonathan Sheffer ("Pure Luck"), Steve Bartek ("Novocaine"), John Debney ("Heartbreakers"), Deborah Lurie ("9"), and The Newton Brothers ("Before I Wake"). 
In the liner notes for the 2006 CD recording of his first concert work "Serenada Schizophrana", Elfman wrote: "I began composing several dozen short improvisational compositions, maybe a minute each. Slowly, some of them began to develop themselves until finally I had six separate movements that, in some abstract, absurd way, felt connected." To create the cadenzas for his violin concerto "Eleven Eleven", Elfman collaborated with soloist Sandy Cameron, for whom the piece was written. Elfman often incorporates choral or vocal arrangements into his film scores, notably the use of women's and children's choirs ("Scrooged", "Nightbreed", "Edward Scissorhands", "Batman Returns", "Sleepy Hollow", "Alice in Wonderland", "The Grinch"), and solo voice or vocal effects ("Beetlejuice", "Mars Attacks!", "Men in Black II", "Flubber", "Nacho Libre", "Iris", "Dark Shadows", "The Girl on the Train"). Evoking the "O Fortuna" from Carl Orff's "Carmina Burana", he set made-up, Latin-sounding text for SATB choir in the standout cue "Descent into Mystery" from "Batman". Elfman also adds his own vocals into compositions in much the same way he mixes his percussion and guitar performances into orchestral arrangements. Prominent use can be heard in the scores for "To Die For" (sung with director Gus Van Sant, credited to "Little Gus and the Suzettes"), "Silver Linings Playbook", and his music for the Hong Kong Disneyland ride Mystic Manor. He provided the singing voice for characters in "The Nightmare Before Christmas" and "Corpse Bride" in addition to composing the scores and songs, and can be heard singing the "Day-O" call in the style of Harry Belafonte's "Banana Boat Song" in the first bars of the "Beetlejuice" main title. 
For Tim Burton's "Charlie and the Chocolate Factory," Elfman set Roald Dahl's text for the Oompa-Loompa characters as four stylistically distinct songs: the Bollywood-influenced "Augustus Gloop," the funk-infused "Violet Beauregarde," the psychedelic pop stylings of "Veruca Salt," and the baroque rock of "Mike Teevee." For all songs in the film, Elfman sang, manipulated and mixed several layers of his vocals to create the singing voices and harmonies of the Oompa Loompas, and incorporated his vocals into non-song score tracks that featured the characters, including "Loompa Land," "Chocolate River," "The Boat Arrives," and "The River Cruise." Unique among film composers, Elfman typically writes the lyrics to songs he has composed for movies. He employs song structures from Tin Pan Alley and early musical theatre composers (32-bar form), and pop and rock of the 1950s and 1960s (verse-chorus). As his songs serve to advance the plot and develop characters, lyrics reflect storylines and imagery specific to the film and express the inner life of characters. A major achievement was writing the lyrics and music for ten songs featured in the stop-motion musical "The Nightmare Before Christmas". Drawing from Tim Burton's parody poem of "A Visit from St. Nicholas" and concept drawings, Elfman wrote each song in consultation with Burton before the film even had a script. These include the full-cast songs "This Is Halloween," "Town Meeting Song," and "Making Christmas"; four songs for the main character Jack Skellington "Jack's Lament," "What's This," "Jack's Obsession," and "Poor Jack" all sung by Elfman; and the other character songs "Kidnap The Sandy Claws," "Oogie Boogie's Song," and "Sally's Song." An eleventh song, "Finale/Reprise," reworks lyrics from the songs "This Is Halloween," "What's This" and "Sally's Song" for the film's ending. 
Though uncredited, Burton contributed some lyrics to "Nightmare", including the line "Perhaps it's the head that I found in the lake" in "Town Meeting Song." Elfman composed five songs for Burton's "Corpse Bride": "According to Plan" with lyrics co-written by screenwriter John August; "Remains of the Day," which he sang as the character Bonejangles, and "Tears To Shed," both with additional lyrics by August; and "The Wedding Song," credited solely to Elfman. The song "Erased" was not used in the final film. He wrote the lyrics to "Lullaby" from "Charlotte's Web", the rock track "The Little Things" from "Wanted," which he also sang in English and Russian, and "Alice's Theme" from "Alice in Wonderland". Elfman co-wrote the music and lyrics to "Face To Face" from "Batman Returns" with Siouxsie and the Banshees, and co-wrote the lyrics to "Twice The Love" from "Big Fish" and "Wonka's Welcome Song" for "Charlie and the Chocolate Factory" with John August. Elfman wrote the lyrics to all of Oingo Boingo's original songs from 1979 to 1994, provided lyrics for Mike Oldfield's album "Islands" in 1987, and has earned residuals from the titular two-word opening phrase sung in his "The Simpsons" theme since the series first aired in 1989. As a teenager, Elfman dated his classmate Kim Gordon, who would later co-found the rock band Sonic Youth. He has two daughters, Lola (born 1979) and Mali (born 1984), from his marriage to Geri Eisenmenger. Mali is a film producer and actress. Elfman and his daughter collaborated on her 2011 film "Do Not Disturb". On November 29, 2003, Elfman married actress Bridget Fonda. They have a son, Oliver. In 1998, Elfman scored "A Simple Plan", starring Fonda. He is the uncle of actor Bodhi Elfman, who is married to actress Jenna Elfman. Elfman has been an atheist since the age of 11 or 12; he describes himself as a "cynicologist". Describing his politics during the 1980s, Elfman said, "I'm not a doomist. 
My attitude is always to be critical of what's around you, but not ever to forget how lucky we are. I've traveled around the world. I left thinking I was a revolutionary. I came back real right-wing patriotic. Since then, I've kind of mellowed in between." In 2008, he expressed support for Barack Obama and said that Sarah Palin was his "worst nightmare". During his 18 years with Oingo Boingo, Elfman developed significant hearing damage as a result of continuous exposure to the high noise levels involved in performing in a rock band. Afraid of worsening his condition, he decided to leave the band, saying that he would never return to that kind of performance. His impairment was so bad that he could not "even sit in a loud restaurant or bar anymore." However, he found performing in front of orchestras more tolerable, and returned several times to reprise his live performance of Jack Skellington. On June 25, 2019, "The New York Times Magazine" listed Danny Elfman among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire. Since "The Simpsons"' second annual "Treehouse of Horror" episode aired in 1991, launching the "scary names" tradition in the opening and closing titles, Elfman has been alternately credited for the theme music as "Red Wolf Elfman," "Danny Skellingelfman," "Li'l Leakin Brain Elfman," "Boris Elfmonivich," "Danny Elfblood," "Danny 'Hell'fman," "The Bloody Elf," "Danny Elfbones," "Elfmunster" and "Daniel Beilzebelsman." Elfman's composition "Clown Dream" from "Pee-wee's Big Adventure" is used in the video game "Grand Theft Auto V" and has often been used as the opening music for Primus concerts. In the 2007 sixth-season "Star Wars" parody "Blue Harvest", "Family Guy" lampooned Elfman's orchestral style. A scene shows Elfman replacing an incinerated John Williams to conduct a full orchestra playing the score, only to be decapitated by a lightsaber after conducting a few bars of oom-pah music. 
Episode five of the 14th season of "South Park" in 2010 criticized Tim Burton for using the "same" music in all his films, referring to Elfman's scores. In October 2016, Elfman produced a video clip for Funny or Die with original "horror" music composed to footage of Donald Trump pacing around Hillary Clinton at the second 2016 United States presidential debate. In 2019, selections from Elfman's "Midnight Run" score were used in the third season of Netflix's "Stranger Things," including "Stairway Chase" in episodes 5 and 6, and "Wild Ride" and "Package Deal" in episode 6. Elfman's scores for "Batman" and "Edward Scissorhands" were nominated for AFI's 100 Years of Film Scores. Including commercial recordings of his film scores and the Oingo Boingo discography, Elfman has produced over 100 albums as of 2019.
Dimension In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it; for example, the point at 5 on a number line. A surface such as a plane or the surface of a cylinder or sphere has a dimension of two (2D) because two coordinates are needed to specify a point on it; for example, both a latitude and longitude are required to locate a point on the surface of a sphere. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces. In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space, but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space. The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space we live in. In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. 
In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc. The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent of the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. The dimension of Euclidean "n"-space E^"n" is "n". When trying to generalize to other types of spaces, one is faced with the question "what makes E^"n" "n"-dimensional?" One answer is that to cover a fixed ball in E^"n" by small balls of radius ε, one needs on the order of ε^(−"n") such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in E^"n" looks locally like E^("n"−1), and this leads to the notion of the inductive dimension. While these notions agree on E^"n", they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract "has four dimensions"", mathematicians usually express this as: "The tesseract "has dimension 4"", or: "The dimension of the tesseract "is" 4" or: 4D. 
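The covering argument above can be checked numerically: covering the unit cube [0,1]^n with boxes of side ε takes ⌈1/ε⌉^n boxes, so the exponent recovered from log N(ε) / log(1/ε) is exactly the dimension n. A minimal Python sketch (function names are illustrative, not from any library):

```python
import math

def boxes_to_cover_unit_cube(n, eps):
    """Axis-aligned boxes of side eps needed to cover the unit cube [0,1]^n."""
    return math.ceil(1 / eps) ** n

def dimension_estimate(n, eps=1e-4):
    """Recover n from the scaling N(eps) ~ eps^(-n)."""
    n_boxes = boxes_to_cover_unit_cube(n, eps)
    return math.log(n_boxes) / math.log(1 / eps)

for n in (1, 2, 3):
    print(n, dimension_estimate(n))  # the estimate matches n
```

For a line segment, square and cube the recovered exponent is 1, 2 and 3 respectively, matching the intuitive dimension.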
Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 "Theorie der vielfachen Kontinuität", and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension. The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the "Hamel dimension" or "algebraic dimension" to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module. The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean "n"-space E^"n", in which the number "n" is the manifold's dimension. For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point. In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases "n" > 4 are simplified by having extra space in which to "work", and the cases "n" = 3 and "n" = 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, where four different proof methods are applied. The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. 
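The basis definition of vector-space dimension can be illustrated computationally: the dimension of the span of a set of vectors equals the number of linearly independent vectors among them, which NumPy exposes as the matrix rank. A short sketch:

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two, so any basis
# of their span contains only two vectors: the span has dimension 2.
vectors = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])
dim = np.linalg.matrix_rank(vectors)
print(dim)  # 2: the span is a plane, even though it sits inside R^3
```

This also illustrates the intrinsic nature of dimension noted earlier: the span is two-dimensional regardless of the three-dimensional space it is embedded in.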
While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number ("x" + "iy") has a real part "x" and an imaginary part "y", where x and y are both real numbers; hence, the complex dimension is half the real dimension. Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension. The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety. An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains V_0 ⊊ V_1 ⊊ ... ⊊ V_d of sub-varieties of the given algebraic set (the length of such a chain is the number of "⊊"). Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if "V" is a variety of dimension "m" and "G" is an algebraic group of dimension "n" acting on "V", then the quotient stack ["V"/"G"] has dimension "m" − "n". 
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length "n" being a sequence P_0 ⊊ P_1 ⊊ ... ⊊ P_n of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0. For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer "n" for which the following holds: any open cover has an open refinement (a second open cover where each element is a subset of an element in the first cover) such that no point is included in more than "n" + 1 elements. In this case dim X = "n". For a manifold, this coincides with the dimension mentioned above. If no such integer exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1, if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open". An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a "new direction", one obtains a 2-dimensional object. In general one obtains an ("n" + 1)-dimensional object by dragging an "n"-dimensional object in a "new" direction. 
The inductive dimension of a topological space may refer to the "small inductive dimension" or the "large inductive dimension", and is based on the analogy that, in the case of metric spaces, "n"-dimensional balls have ("n" − 1)-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1. Similarly, for the class of CW complexes, the dimension of an object is the largest "n" for which the "n"-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles. The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values. Fractals have been found useful to describe many natural objects and phenomena. Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide. Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. 
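The non-integer values taken by box and Hausdorff dimensions are easy to see for a standard fractal, the middle-thirds Cantor set: at scale (1/3)^k it is covered by exactly 2^k intervals, so the box-counting quotient log N / log(1/ε) equals log 2 / log 3 ≈ 0.631 at every scale. A short Python sketch of the computation:

```python
import math

def cantor_box_dimension(k):
    """Box-counting estimate for the middle-thirds Cantor set at scale (1/3)^k.

    At step k the set is covered by exactly 2^k intervals of length (1/3)^k,
    so log N / log(1/eps) equals log 2 / log 3 regardless of k.
    """
    eps = (1 / 3) ** k
    n_intervals = 2 ** k
    return math.log(n_intervals) / math.log(1 / eps)

print(cantor_box_dimension(20))  # ≈ 0.6309, strictly between 0 and 1
```

The result is a dimension strictly between that of a point (0) and a line (1), which is exactly the kind of non-integer value the fractal dimensions above are designed to capture.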
Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies; "i.e.", moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.) A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction. The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy). The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. 
Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" at such tiny scales as to be effectively invisible to current experiments. Limits on the size and other properties of extra dimensions are set by particle experiments such as those at the Large Hadron Collider. In 1921, Kaluza–Klein theory presented a 5D spacetime including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza–Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building. In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a subspace. 
Thus the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume. Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be because three is the largest number of spatial dimensions where strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration. Extra dimensions are said to be universal if all fields are equally free to propagate within them. Some complex networks are characterized by fractal dimensions. The concept of dimension can be generalized to include networks embedded in space; their dimension characterizes their spatial constraints. Science fiction texts often mention the concept of "dimension" when referring to parallel or alternate universes or other imagined planes of existence.
This usage is derived from the idea that to travel to parallel/alternate universes/planes of existence one must travel in a direction/dimension besides the standard ones. In effect, the other universes/planes are just a small distance away from our own, but the distance is in a fourth (or higher) spatial (or non-spatial) dimension, not the standard ones. One of the most heralded science fiction stories regarding true geometric dimensionality, and often recommended as a starting point for readers new to the subject, is the 1884 novella "Flatland" by Edwin A. Abbott. Isaac Asimov, in his foreword to the Signet Classics 1984 edition, described "Flatland" as "The best introduction one can find into the manner of perceiving dimensions." The idea of other dimensions was incorporated into many early science fiction stories, appearing prominently, for example, in Miles J. Breuer's "The Appendix and the Spectacles" (1928) and Murray Leinster's "The Fifth-Dimension Catapult" (1931); it appeared irregularly in science fiction by the 1940s. Classic stories involving other dimensions include Robert A. Heinlein's "—And He Built a Crooked House" (1941), in which a California architect designs a house based on a three-dimensional projection of a tesseract; and Alan E. Nourse's "Tiger by the Tail" and "The Universe Between" (both 1951). Another reference is Madeleine L'Engle's novel "A Wrinkle In Time" (1962), which uses the fifth dimension as a way for "tesseracting the universe" or "folding" space in order to move across it quickly. The fourth and fifth dimensions are also a key component of the book "The Boy Who Reversed Himself" by William Sleator. Immanuel Kant, in 1783, wrote: "That everywhere space (which is not itself the boundary of another space) has three dimensions and that space in general cannot have more dimensions is based on the proposition that not more than three lines can intersect at right angles in one point.
This proposition cannot at all be shown from concepts, but rests immediately on intuition and indeed on pure intuition "a priori" because it is apodictically (demonstrably) certain." "Space has Four Dimensions" is a short story published in 1846 by German philosopher and experimental psychologist Gustav Fechner under the pseudonym "Dr. Mises". The protagonist in the tale is a shadow who is aware of and able to communicate with other shadows, but who is trapped on a two-dimensional surface. According to Fechner, this "shadow-man" would conceive of the third dimension as being one of time. The story bears a strong similarity to the "Allegory of the Cave" presented in Plato's "The Republic" (c. 380 BC). Simon Newcomb wrote an article for the "Bulletin of the American Mathematical Society" in 1898 entitled "The Philosophy of Hyperspace". Linda Dalrymple Henderson coined the term "hyperspace philosophy", used to describe writing that uses higher dimensions to explore metaphysical themes, in her 1983 thesis about the fourth dimension in early-twentieth-century art. Examples of "hyperspace philosophers" include Charles Howard Hinton, the first writer, in 1888, to use the word "tesseract"; and the Russian esotericist P. D. Ouspensky.
https://en.wikipedia.org/wiki?curid=8398
Duodecimal The duodecimal system (also known as base 12, dozenal, or rarely uncial) is a positional notation numeral system using twelve as its base. The number twelve (that is, the number written as "12" in the base ten numerical system) is instead written as "10" in duodecimal (meaning "1 dozen and 0 units", instead of "1 ten and 0 units"), whereas the digit string "12" means "1 dozen and 2 units" (i.e. the same number that in decimal is written as "14"). Similarly, in duodecimal "100" means "1 gross", "1000" means "1 great gross", and "0.1" means "1 twelfth" (instead of their decimal meanings "1 hundred", "1 thousand", and "1 tenth"). The number twelve, a superior highly composite number, is the smallest number with four non-trivial factors (2, 3, 4, 6), the smallest to include as factors all four numbers (1 to 4) within the subitizing range, and the smallest abundant number. As a result of this increased factorability of the radix and its divisibility by a wide range of the most elemental numbers (whereas ten has only two non-trivial factors: 2 and 5, and not 3, 4, or 6), duodecimal representations fit more easily than decimal ones into many common patterns, as evidenced by the higher regularity observable in the duodecimal multiplication table. As a result, duodecimal has been described as the optimal number system. Of its factors, 2 and 3 are prime, which means the reciprocals of all 3-smooth numbers (such as 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, ...) have a terminating representation in duodecimal. In particular, the five most elementary fractions (1/2, 1/3, 2/3, 1/4 and 3/4) all have a short terminating representation in duodecimal (0.6, 0.4, 0.8, 0.3 and 0.9, respectively), and twelve is the smallest radix with this feature (because it is the least common multiple of 3 and 4).
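The terminating representations quoted above are easy to verify mechanically. A minimal sketch (the function name is an illustrative choice) that expands a fraction digit by digit in a given base:

```python
from fractions import Fraction

def expand(frac, base, max_digits=20):
    """Return (digits, terminated) for the radix-point expansion of frac."""
    digits = []
    f = frac
    for _ in range(max_digits):
        if f == 0:
            return digits, True
        f *= base
        digit = int(f)      # integer part is the next digit
        digits.append(digit)
        f -= digit
    return digits, False    # did not terminate within max_digits

assert expand(Fraction(1, 2), 12) == ([6], True)   # 1/2 = 0.6 in duodecimal
assert expand(Fraction(1, 3), 12) == ([4], True)   # 1/3 = 0.4
assert expand(Fraction(2, 3), 12) == ([8], True)   # 2/3 = 0.8
assert expand(Fraction(1, 4), 12) == ([3], True)   # 1/4 = 0.3
assert expand(Fraction(3, 4), 12) == ([9], True)   # 3/4 = 0.9
assert expand(Fraction(1, 5), 12)[1] is False      # 1/5 recurs in base twelve
print("quoted expansions confirmed")
```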
This all makes it a more convenient number system for computing fractions than most other number systems in common use, such as the decimal, vigesimal, binary, octal and hexadecimal systems. Although the trigesimal and sexagesimal systems (where the reciprocals of all 5-smooth numbers terminate) do even better in this respect, this is at the cost of unwieldy multiplication tables and a much larger number of symbols to memorize. Various symbols have been used to stand for ten and eleven in duodecimal notation; Unicode includes ↊ (U+218A) and ↋ (U+218B). Using these symbols, a count from zero to twelve in duodecimal reads: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ↊, ↋, 10. These were implemented in Unicode 8.0 (2015), but most general Unicode fonts in use by current operating systems and browsers have not yet included them. A more common alternative is to use A and B, as in hexadecimal, and this page uses A and B. Languages using duodecimal number systems are uncommon. Languages in the Nigerian Middle Belt such as Janji, Gbiri-Niragu (Gure-Kahugu), Piti, and the Nimbia dialect of Gwandara; and the Chepang language of Nepal are known to use duodecimal numerals. Germanic languages have special words for 11 and 12, such as "eleven" and "twelve" in English. However, they come from Proto-Germanic *"ainlif" and *"twalif" (meaning, respectively, "one left" and "two left"), suggesting a decimal rather than duodecimal origin. Historically, units of time in many civilizations have been duodecimal. There are twelve signs of the zodiac, twelve months in a year, and the Babylonians had twelve hours in a day (although at some point this was changed to 24). Traditional Chinese calendars, clocks, and compasses are based on the twelve Earthly Branches. There are 12 inches in an imperial foot, 12 troy ounces in a troy pound, 12 old British pence in a shilling, 24 (12×2) hours in a day, and many other items counted by the dozen, gross (144, square of 12), or great gross (1728, cube of 12).
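The positional readings described earlier (dozen, gross, great gross) can be checked with Python's built-in base parsing, using A and B for ten and eleven as the text mentions:

```python
# Each duodecimal numeral string is parsed with base 12.
assert int("10", 12) == 12        # "10" duodecimal is one dozen
assert int("12", 12) == 14        # "12" duodecimal is 1 dozen and 2 units
assert int("100", 12) == 144      # 1 gross (the square of twelve)
assert int("1000", 12) == 1728    # 1 great gross (the cube of twelve)
assert int("B", 12) == 11         # B stands for eleven
print("all positional identities hold")
```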
The Romans used a fraction system based on 12, including the uncia which became both the English words "ounce" and "inch". Pre-decimalisation, Ireland and the United Kingdom used a mixed duodecimal-vigesimal currency system (12 pence = 1 shilling, 20 shillings or 240 pence to the pound sterling or Irish pound), and Charlemagne established a monetary system that also had a mixed base of twelve and twenty, the remnants of which persist in many places. The importance of 12 has been attributed to the number of lunar cycles in a year as well as the fact that humans have 12 finger bones (phalanges) on one hand (three in each of four fingers). It is possible to count to 12 with the thumb acting as a pointer, touching each finger bone in turn. A traditional finger counting system still in use in many regions of Asia works in this way and could help to explain the occurrence of numeral systems based on 12 and 60 besides those based on 10, 20, and 5. In this system, the one (usually right) hand counts repeatedly to 12, displaying the number of iterations on the other (usually left), until five dozens, i.e. sixty, are reached. In a duodecimal place system, twelve is written as 10 but there are numerous proposals for how to write ten and eleven. To allow entry on typewriters, letters such as A and B (as in hexadecimal), T and E (initials of Ten and Eleven), X and E (X from the Roman numeral for ten), or X and Z are used. Some employ Greek letters such as δ (standing for Greek δέκα 'ten') and ε (for Greek ένδεκα 'eleven'), or τ and ε. Frank Emerson Andrews, an early American advocate for duodecimal, suggested and used in his book "New Numbers" an X and ℰ (script E). Edna Kramer in her 1951 book "The Main Stream of Mathematics" used a six-pointed asterisk (sextile) ⚹ and a hash (or octothorpe) #. The symbols were chosen because they are available on typewriters; they are also on push-button telephones.
This notation was used in publications of the Dozenal Society of America (DSA) from 1974 to 2008. From 2008 to 2015, the DSA used the symbols devised by William Addison Dwiggins. The Dozenal Society of Great Britain (DSGB) proposed its own pair of symbols; this notation, derived from the Arabic digits 2 and 3 by 180° rotation, was introduced by Sir Isaac Pitman. In March 2013, a proposal was submitted to include the digit forms for ten and eleven propagated by the Dozenal Societies in the Unicode Standard. Of these, the British/Pitman forms were accepted for encoding as characters at code points U+218A (↊) and U+218B (↋). They were included in the Unicode 8.0 release in June 2015 and are available in LaTeX as \textturntwo and \textturnthree. After the Pitman digits were added to Unicode, the DSA took a vote and then began publishing content using the Pitman digits instead. They still use the letters X and E in ASCII text. As the Unicode characters are poorly supported, this page uses A and B. Other proposals are more creative or aesthetic; for example, many do not use any Arabic numerals under the principle of "separate identity." There are also varying proposals of how to distinguish a duodecimal number from a decimal one. They include italicizing duodecimal numbers ("54" = 64), adding a "Humphrey point" (a semicolon instead of a decimal point) to duodecimal numbers ("54;6 = 64.5"), or some combination of the two. Others use subscript or affixed labels to indicate the base, allowing for more than decimal and duodecimal to be represented (for single letters, 'z' from "dozenal" is used, since 'd' would mean decimal), such as "54z = 64d", "54₁₂ = 64₁₀" or "doz 54 = dec 64." The Dozenal Society of America suggested the pronunciation of ten and eleven as "dek" and "el". For the names of powers of twelve there are two prominent systems. In this system, the prefix "e"- is added for fractions.
Multiple digits in this series are pronounced differently: 12 is "do two"; 30 is "three do"; 100 is "gro"; BA9 is "el gro dek do nine"; B86 is "el gro eight do six"; 8BB,15A is "eight gro el do el mo, one gro five do dek"; and so on. This system uses a "-qua" ending for the positive powers of 12 and a "-cia" ending for the negative powers of 12, and an extension of the IUPAC systematic element names (with syllables dec and lev for the two extra digits needed for duodecimal) to express which power is meant. William James Sidis used 12 as the base for his constructed language Vendergood in 1906, noting it being the smallest number with four factors and its prevalence in commerce. The case for the duodecimal system was put forth at length in Frank Emerson Andrews' 1935 book "New Numbers: How Acceptance of a Duodecimal Base Would Simplify Mathematics". Andrews noted that, due to the prevalence of factors of twelve in many traditional units of weight and measure, many of the computational advantages claimed for the metric system could be realized either by the adoption of ten-based weights and measures or by the adoption of the duodecimal number system. Both the Dozenal Society of America and the Dozenal Society of Great Britain promote widespread adoption of the base-twelve system. They use the word "dozenal" instead of "duodecimal" to avoid the more overtly base-ten terminology. However, the etymology of "dozenal" itself is also an expression based on base-ten terminology since "dozen" is a direct derivation of the French word "douzaine", which is a derivative of the French word for twelve, "douze", related to the old French word "doze" from Latin "duodecim". Since at least as far back as 1945 some members of the Dozenal Society of America and Dozenal Society of Great Britain have suggested that a more apt word would be "uncial".
Uncial is a derivation of the Latin word "uncia", meaning "one-twelfth", and also the base-twelve analogue of the Latin word "decima", meaning "one-tenth". Mathematician and mental calculator Alexander Craig Aitken was an outspoken advocate of duodecimal. In Lee Carroll's "Kryon: Alchemy of the Human Spirit", a chapter is dedicated to the advantages of the duodecimal system. The duodecimal system is supposedly suggested by Kryon (a fictional entity believed in by New Age circles) for all-round use, aiming at a better and more natural representation of the nature of the Universe through mathematics. An individual article "Mathematica" by James D. Watt (included in the above publication) presents a few of the unusual symmetry connections between the duodecimal system and the golden ratio, and provides numerous number symmetry-based arguments for the universal nature of the base-12 number system. In "Little Twelvetoes", the American television series "Schoolhouse Rock!" portrayed an alien child using base-twelve arithmetic, using "dek", "el" and "doh" as names for ten, eleven and twelve, and Andrews' script-X and script-E for the digit symbols. Dozenalists have also proposed various systems of measurement. The number 12 has six factors, which are 1, 2, 3, 4, 6, and 12, of which 2 and 3 are prime. The decimal system has only four factors, which are 1, 2, 5, and 10, of which 2 and 5 are prime. Vigesimal (base 20) adds two factors to those of ten, namely 4 and 20, but no additional prime factor. Although twenty has 6 factors, 2 of them prime, similarly to twelve, it is also a much larger base, and so the digit set and the multiplication table are much larger. Binary has only two factors, 1 and 2, the latter being prime. Hexadecimal (base 16) has five factors, adding 4, 8 and 16 to those of 2, but no additional prime.
Trigesimal (base 30) is the smallest system that has three different prime factors (all of the three smallest primes: 2, 3 and 5) and it has eight factors in total (1, 2, 3, 5, 6, 10, 15, and 30). Sexagesimal—which the ancient Sumerians and Babylonians among others actually used—adds the four convenient factors 4, 12, 20, and 60 to this but no new prime factors. The smallest system that has four different prime factors is base 210, and the pattern follows the primorials. In all base systems, there are similarities in the representation of multiples of numbers which are one less than the base. To convert numbers between bases, one can use the general conversion algorithm (see the relevant section under positional notation). Alternatively, one can use digit-conversion tables. The ones provided below can be used to convert any duodecimal number between 0;01 and BBB,BBB;BB to decimal, or any decimal number between 0.01 and 999,999.99 to duodecimal. To use them, the given number must first be decomposed into a sum of numbers with only one significant digit each. For example: This decomposition works the same no matter what base the number is expressed in. Just isolate each non-zero digit, padding them with as many zeros as necessary to preserve their respective place values. If the digits in the given number include zeroes (for example, 102,304.05), these are, of course, left out in the digit decomposition (102,304.05 = 100,000 + 2,000 + 300 + 4 + 0.05). Then the digit conversion tables can be used to obtain the equivalent value in the target base for each digit.
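As an alternative to the digit-conversion tables, the general conversion algorithm can be sketched directly. Here A and B stand for ten and eleven, ";" is the duodecimal (Humphrey) point, and the function names and example values are illustrative:

```python
from fractions import Fraction

DIGITS = "0123456789AB"

def to_duodecimal(value, frac_places=2):
    """Convert a non-negative number to a duodecimal string, truncating the fraction."""
    value = Fraction(value)
    whole, frac = int(value), value - int(value)
    out = ""
    while True:                       # emit integer digits, least significant first
        out = DIGITS[whole % 12] + out
        whole //= 12
        if whole == 0:
            break
    if frac:
        out += ";"
        for _ in range(frac_places):  # emit fractional digits by repeated *12
            frac *= 12
            out += DIGITS[int(frac)]
            frac -= int(frac)
    return out

def from_duodecimal(s):
    """Convert a duodecimal string (optionally with a ';' point) to a Fraction."""
    int_part, _, frac_part = s.partition(";")
    result = Fraction(int(int_part, 12))
    for i, d in enumerate(frac_part, start=1):
        result += Fraction(DIGITS.index(d), 12 ** i)
    return result

print(to_duodecimal(Fraction(12345678, 100)))          # 5B540;94
print(round(float(from_duodecimal("123456;78")), 3))   # 296130.639
```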
If the given number is in duodecimal and the target base is decimal, we get: Now, because the summands are already converted to base ten, the usual decimal arithmetic is used to perform the addition and recompose the number, arriving at the conversion result: That is, (duodecimal) 123,456;78 equals (decimal) 296,130.638... ≈ 296,130.64. If the given number is in decimal and the target base is duodecimal, the method is basically the same. Using the digit conversion tables (and writing A for ten and B for eleven): (decimal) 100,000 + 20,000 + 3,000 + 400 + 50 + 6 + 0.7 + 0.08 = (duodecimal) 49,A54 + B,6A8 + 1,8A0 + 294 + 42 + 6 + 0;84972497249724972497... + 0;0B62... However, in order to do this sum and recompose the number, now the addition tables for the duodecimal system have to be used, instead of the addition tables for decimal most people are already familiar with, because the summands are now in base twelve and so the arithmetic with them has to be in duodecimal as well. In decimal, 6 + 6 equals 12, but in duodecimal it equals 10; so, if using decimal arithmetic with duodecimal numbers, one would arrive at an incorrect result. Doing the arithmetic properly in duodecimal, one gets the result: That is, (decimal) 123,456.78 equals (duodecimal) 5B,540;9... ≈ 5B,540;94. This section is about the divisibility rules in duodecimal. Any integer is divisible by 1. If a number is divisible by 2 then the unit digit of that number will be 0, 2, 4, 6, 8 or A. If a number is divisible by 3 then the unit digit of that number will be 0, 3, 6 or 9. If a number is divisible by 4 then the unit digit of that number will be 0, 4 or 8. To test for divisibility by 5, double the units digit and subtract the result from the number formed by the rest of the digits. If the result is divisible by 5 then the given number is divisible by 5. This rule comes from 21 (5×5). Examples: 13 rule => |1 − 2×3| = 5, which is divisible by 5. 2BA5 rule => |2BA − 2×5| = 2B0 (5×70), which is divisible by 5 (or apply the rule on 2B0).
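The doubling rule for 5 can be confirmed by brute force. The sketch below works on the decimal value of a number and splits off its last duodecimal digit:

```python
def rule_step(n):
    """Apply the rule once, treating n as the value of a duodecimal numeral."""
    rest, units = divmod(n, 12)   # rest = all but the last duodecimal digit
    return abs(rest - 2 * units)

# The rule preserves divisibility by 5 because, writing n = 12r + u,
# 2n = 25r - (r - 2u), and 25 (decimal) = 5*5 is 21 in duodecimal.
for n in range(1, 5000):
    assert (n % 5 == 0) == (rule_step(n) % 5 == 0)

print("doubling rule for 5 verified for 1..4999")
```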
OR To test for divisibility by 5, triple the number formed by the rest of the digits, subtract the units digit, and take the absolute value of the difference. If the result is divisible by 5 then the given number is divisible by 5. This rule comes from 13 (5×3). Examples: 13 rule => |3 − 3×1| = 0, which is divisible by 5. 2BA5 rule => |5 − 3×2BA| = 8B1 (5×195), which is divisible by 5 (or apply the rule on 8B1). OR Form the alternating sum of blocks of two from right to left. If the result is divisible by 5 then the given number is divisible by 5. This rule comes from 101, since 101 = 5×25; thus this rule can also be used to test divisibility by 25. Example: 97,374,627 => 27 − 46 + 37 − 97 = −7B, which is divisible by 5. If a number is divisible by 6 then the unit digit of that number will be 0 or 6. To test for divisibility by 7, triple the units digit and add the result to the number formed by the rest of the digits. If the result is divisible by 7 then the given number is divisible by 7. This rule comes from 2B (7×5). Examples: 12 rule => |3×2 + 1| = 7, which is divisible by 7. 271B rule => |3×B + 271| = 29A (7×4A), which is divisible by 7 (or apply the rule on 29A). OR To test for divisibility by 7, double the number formed by the rest of the digits, subtract the units digit, and take the absolute value of the difference. If the result is divisible by 7 then the given number is divisible by 7. This rule comes from 12 (7×2). Examples: 12 rule => |2 − 2×1| = 0, which is divisible by 7. 271B rule => |B − 2×271| = 513 (7×89), which is divisible by 7 (or apply the rule on 513). OR To test for divisibility by 7, multiply the units digit by 4 and subtract the result from the number formed by the rest of the digits. If the result is divisible by 7 then the given number is divisible by 7. This rule comes from 41 (7×7). Examples: 12 rule => |4×2 − 1| = 7, which is divisible by 7. 271B rule => |4×B − 271| = 235 (7×3B), which is divisible by 7 (or apply the rule on 235). OR Form the alternating sum of blocks of three from right to left.
If the result is divisible by 7 then the given number is divisible by 7. This rule comes from 1001, since 1001 = 7×11×17; thus this rule can also be used to test divisibility by 11 and 17. Example: 386,967,443 => 443 − 967 + 386 = −168, which is divisible by 7. If the 2-digit number formed by the last 2 digits of the given number is divisible by 8 then the given number is divisible by 8. Example: 148, 4120. If the 2-digit number formed by the last 2 digits of the given number is divisible by 9 then the given number is divisible by 9. Example: 7423, 8330. If the number is divisible by 2 and 5 then the number is divisible by A (ten). If the sum of the digits of a number is divisible by B (eleven) then the number is divisible by B (the equivalent of casting out nines in decimal). Example: 29, 6113. If a number is divisible by 10 then the unit digit of that number will be 0. Sum the alternate digits and subtract the sums. If the result is divisible by 11 the number is divisible by 11 (the equivalent of divisibility by eleven in decimal). Example: 66, 9427. If the number is divisible by 2 and 7 then the number is divisible by 12. If the number is divisible by 3 and 5 then the number is divisible by 13. If the 2-digit number formed by the last 2 digits of the given number is divisible by 14 then the given number is divisible by 14. Example: 1468, 7394. Duodecimal fractions may be simple: or complicated: As explained in recurring decimals, whenever an irreducible fraction is written in radix point notation in any base, the fraction can be expressed exactly (terminates) if and only if all the prime factors of its denominator are also prime factors of the base. Thus, in the base-ten (= 2×5) system, fractions whose denominators are made up solely of multiples of 2 and 5 terminate: 1/8, 1/20 and 1/500 can be expressed exactly as 0.125, 0.05 and 0.002 respectively. 1/3 and 1/7, however, recur (0.333... and 0.142857142857...).
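The termination criterion just stated (an irreducible fraction terminates exactly when every prime factor of its denominator divides the base) can be sketched as:

```python
from math import gcd

def terminates(n, base):
    """True if 1/n has a finite radix-point representation in the given base."""
    while (g := gcd(n, base)) > 1:   # strip prime factors shared with the base
        while n % g == 0:
            n //= g
    return n == 1                    # nothing left over => terminates

assert terminates(8, 10) and terminates(20, 10) and terminates(500, 10)
assert not terminates(3, 10) and not terminates(7, 10)     # 0.333..., 0.142857...
assert terminates(8, 12) and terminates(3, 12)             # exact in duodecimal
assert not terminates(20, 12) and not terminates(500, 12)  # factor 5 recurs
print("termination criterion matches the examples")
```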
In the duodecimal (= 2×2×3) system, 1/8 is exact; 1/20 and 1/500 recur because they include 5 as a factor; 1/3 is exact; and 1/7 recurs, just as it does in decimal. The number of denominators which give terminating fractions within a given number of digits, say n, in a base b is the number of factors (divisors) of b^n, the nth power of the base b (although this includes the divisor 1, which does not produce fractions when used as the denominator). The number of factors of b^n is given using its prime factorization. For decimal, 10^n = 2^n × 5^n. The number of divisors is found by adding one to each exponent of each prime and multiplying the resulting quantities together, so the number of factors of 10^n is (n+1)(n+1) = (n+1)^2. For example, the number 8 is a factor of 10^3 (1000), so 1/8 and other fractions with a denominator of 8 cannot require more than 3 fractional decimal digits to terminate: 5/8 = 0.625 (base ten). For duodecimal, 12^n = 2^(2n) × 3^n. This has (2n+1)(n+1) divisors. The sample denominator of 8 is a factor of a gross (12^2 = 144), so eighths cannot need more than two duodecimal fractional places to terminate: 5/8 = 0;76 (base twelve). Because both ten and twelve have two unique prime factors, the number of divisors of b^n for b = 10 or 12 grows quadratically with the exponent n (in other words, of the order of n^2). The Dozenal Society of America argues that factors of 3 are more commonly encountered in real-life division problems than factors of 5. Thus, in practical applications, the nuisance of repeating decimals is encountered less often when duodecimal notation is used. Advocates of duodecimal systems argue that this is particularly true of financial calculations, in which the twelve months of the year often enter into calculations.
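The divisor-count formulas above, (n+1)^2 for 10^n and (2n+1)(n+1) for 12^n, can be verified directly with a brute-force divisor counter:

```python
def count_divisors(m):
    """Count the divisors of m by trial division (fine for small m)."""
    return sum(1 for d in range(1, m + 1) if m % d == 0)

for n in range(1, 5):
    assert count_divisors(10 ** n) == (n + 1) ** 2          # 10^n = 2^n * 5^n
    assert count_divisors(12 ** n) == (2 * n + 1) * (n + 1) # 12^n = 2^(2n) * 3^n

print("divisor-count formulas confirmed for n = 1..4")
```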
However, when recurring fractions do occur in duodecimal notation, they are less likely to have a very short period than in decimal notation, because 12 (twelve) is between two prime numbers, 11 (eleven) and 13 (thirteen), whereas ten is adjacent to the composite number 9. Nonetheless, having a shorter or longer period doesn't help the main inconvenience that one does not get a finite representation for such fractions in the given base (so rounding, which introduces inexactitude, is necessary to handle them in calculations), and overall one is more likely to have to deal with infinite recurring digits when fractions are expressed in decimal than in duodecimal, because one out of every three consecutive numbers contains the prime factor 3 in its factorization, whereas only one out of every five contains the prime factor 5. All other prime factors, except 2, are not shared by either ten or twelve, so they do not influence the relative likeliness of encountering recurring digits (any irreducible fraction that contains any of these other factors in its denominator will recur in either base). Also, the prime factor 2 appears twice in the factorization of twelve, whereas only once in the factorization of ten; which means that most fractions whose denominators are powers of two will have a shorter, more convenient terminating representation in duodecimal than in decimal (e.g. 1/(2^2) = 0.25 in decimal = 0;3 in duodecimal; 1/(2^3) = 0.125 = 0;16; 1/(2^4) = 0.0625 = 0;09; 1/(2^5) = 0.03125 = 0;046; etc.). The duodecimal period lengths of 1/n are (in base 10) The duodecimal period lengths of 1/(nth prime) are (in base 10) The smallest primes with duodecimal period n are (in base 10) The representations of irrational numbers in any positional number system (including decimal and duodecimal) neither terminate nor repeat. The following table gives the first digits for some important algebraic and transcendental numbers in both decimal and duodecimal.
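The period lengths discussed above can be computed as the multiplicative order of the base modulo the part of the denominator coprime to the base; a sketch:

```python
from math import gcd

def period(n, base):
    """Length of the repeating block of 1/n in the given base (0 if it terminates)."""
    while (g := gcd(n, base)) > 1:   # remove the factors shared with the base
        n //= g
    if n == 1:
        return 0
    k, r = 1, base % n               # find smallest k with base^k = 1 (mod n)
    while r != 1:
        r = (r * base) % n
        k += 1
    return k

assert period(3, 12) == 0    # 1/3 terminates in duodecimal (0;4)
assert period(3, 10) == 1    # ...but recurs in decimal (0.333...)
assert period(5, 12) == 4    # 1/5 in duodecimal: 0;2497 2497 ...
assert period(7, 10) == 6    # 1/7 in decimal: 0.142857...
print("period lengths match the discussion")
```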
https://en.wikipedia.org/wiki?curid=8400
David Hayes Agnew David Hayes Agnew (November 24, 1818 – March 22, 1892) was an American surgeon. Agnew was born on November 24, 1818, in Nobleville, Pennsylvania (present-day Christiana). His parents were Robert Agnew and Agnes Noble. Agnew grew up as a Christian. He was surrounded by a family of doctors and had always known he was going to become a physician. As a young boy, he had a sharp sense of humor and was very intelligent. He graduated from the University of Pennsylvania School of Medicine in 1838. He returned to Nobleville to help his father in his clinic, where he worked for two years. His father was an asthmatic and moved to Maryland in 1840 because the climate was more suited to his condition; Agnew moved with him. On November 21, 1841, he married Margaret Irwin. In 1852, he bought and revived the Philadelphia School of Anatomy, which he ran for ten years, until 1862. During the American Civil War he was consulting surgeon in the Mower Army Hospital, near Philadelphia, and acquired a considerable reputation for his operations in cases of gunshot wounds. On December 21, 1863, he became the Demonstrator of Anatomy and Assistant Lecturer on Clinical Surgery at the University of Pennsylvania. Later, he was asked to assist the Professor of Surgery in the conduct of the surgical clinics. In 1865, he gave summer instruction courses. For the next seven years, he worked for the University as Demonstrator of Anatomy. A large portion of his success was due to his wife's energy, intelligence, and determination; she gave him an impetus to try harder and not be satisfied with his first attempt. On July 2, 1881, President James A. Garfield was shot by Charles J. Guiteau, and Agnew served as chief consulting surgeon in the president's treatment. When a committee came to pay him for his services, Agnew said, "Gentlemen, I present no bill for my attendance to President Garfield. I gave my services freely and gratuitously".
He was never optimistic about the President's case and was not misled by fallacious beliefs. The case nonetheless enhanced Agnew's reputation. "The Agnew Clinic" is an 1889 painting by Thomas Eakins which depicts Agnew conducting a mastectomy operation before a gallery of students and doctors. David Agnew wrote "The Principles and Practice of Surgery", a three-volume set that he published from 1878 to 1883. He also helped found the Irwin & Agnew Iron Foundry in 1846. Agnew suffered a severe attack of epidemic influenza in 1890 and never fully recovered. Following this, he had an attack of broncho-vesicular catarrh. On March 9, 1892, he was confined to bed with a series of medical problems. After a few days his condition began to improve, but on March 12 it suddenly became much worse. On March 20, he fell into a comatose condition, and he remained so until he died at 3:20 p.m. on March 22, 1892. He is buried in West Laurel Hill Cemetery.
https://en.wikipedia.org/wiki?curid=8401
Diving (sport) Diving is the sport of jumping or falling into water from a platform or springboard, usually while performing acrobatics. Diving is an internationally recognized sport that is part of the Olympic Games. In addition, unstructured and non-competitive diving is a recreational pastime. Competitors possess many of the same characteristics as gymnasts and dancers, including strength, flexibility, kinaesthetic judgment and air awareness. Some professional divers were originally gymnasts or dancers, as both sports share many characteristics with diving. Dmitri Sautin holds the record for most Olympic diving medals won, having won eight medals in total between 1992 and 2008. Although diving has been a popular pastime across the world since ancient times, the first modern diving competitions were held in England in the 1880s. The exact origins of the sport are unclear, though it likely derives from the act of diving at the start of swimming races. The 1904 book "Swimming" by Ralph Thomas notes English reports of plunging records dating back to at least 1865. The 1877 edition of "British Rural Sports" by John Henry Walsh makes note of a "Mr. Young" plunging in 1870, and also states that 25 years prior, a swimmer named Drake could cover . The English Amateur Swimming Association (at the time called the Swimming Association of Great Britain) first started a "plunging championship" in 1883. The Plunging Championship was discontinued in 1937. Diving into a body of water had also been a method used by gymnasts in Germany and Sweden since the early 19th century. The soft landing allowed for more elaborate gymnastic feats in midair as the jump could be made from a greater height. This tradition evolved into 'fancy diving', while diving as a preliminary to swimming became known as 'plain diving'.
In England, the practice of high diving – diving from a great height – gained popularity; the first diving stages were erected at the Highgate Ponds at a height of in 1893 and the first world championship event, the National Graceful Diving Competition, was held there by the Royal Life Saving Society in 1895. The event consisted of standing and running dives from either . It was at this event that the Swedish tradition of fancy diving was introduced to the sport by the athletes Otto Hagborg and C F Mauritzi. They demonstrated their acrobatic techniques from the 10m diving board at Highgate Pond and stimulated the establishment of the Amateur Diving Association in 1901, the first organization devoted to diving in the world (later amalgamated with the Amateur Swimming Association). Fancy diving was formally introduced into the championship in 1903. Plain diving was first introduced into the Olympics at the 1904 event. The 1908 Olympics in London added 'fancy diving' and introduced elastic boards rather than fixed platforms. Women were first allowed to participate in the diving events for the 1912 Olympics in Stockholm. In the 1928 Olympics, 'plain' and 'fancy' diving were amalgamated into one event – 'Highboard Diving'. The diving event was first held indoors in the Empire Pool for the 1934 British Empire Games and 1948 Summer Olympics in London. Most diving competitions consist of three disciplines: 1 m and 3 m springboards, and the platform. Competitive athletes are divided by gender, and often by age group. In platform events, competitors are allowed to perform their dives on either the five, seven and a half (generally just called seven), nine, or ten meter towers. In major diving meets, including the Olympic Games and the World Championships, platform diving is from the 10 meter height. Divers have to perform a set number of dives according to established requirements, including somersaults and twists. 
Divers are judged on whether and how well they completed all aspects of the dive, the conformance of their body to the requirements of the dive, and the amount of splash created by their entry to the water. A possible score out of ten is broken down into three points for the takeoff (meaning the hurdle), three for the flight (the actual dive), and three for the entry (how the diver hits the water), with one more available to give the judges flexibility. The raw score is multiplied by a degree of difficulty factor, derived from the number and combination of movements attempted. The diver with the highest total score after a sequence of dives is declared the winner. Synchronized diving was adopted as an Olympic sport in 2000. Two divers form a team and perform identical dives simultaneously. It used to be possible to dive opposites, also known as a pinwheel (for example, one diver performing a forward dive and the other an inward dive in the same position, or one a reverse and the other a back movement), but this is no longer part of competitive synchronized diving. In these events, the diving is judged both on the quality of execution and the synchronicity – in timing of take-off and entry, height and forward travel. There are rules governing the scoring of a dive. Usually a score considers three elements of the dive: the approach, the flight, and the entry. The primary factors affecting the scoring are: Each dive is assigned a "degree of difficulty" (DD), which is determined from a combination of the moves undertaken, position used, and height. The DD value is multiplied by the scores given by the judges. To reduce the subjectivity of scoring in major meets, panels of five or seven judges are assembled; major international events such as the Olympics use seven-judge panels. For a five-judge panel, the highest and lowest scores are discarded and the middle three are summed and multiplied by the DD. 
For seven-judge panels, as of the 2012 London Olympics, the two highest scores and two lowest are discarded, leaving three to be summed and multiplied by the DD. (Prior to the London Olympics, the highest and lowest scores were eliminated, and the remaining five scores were multiplied by , to allow for comparison to five-judge panels.) Discarding the extreme scores makes it difficult for a single judge to manipulate the result. There is a general misconception about scoring and judging: in serious meets, it is the relative score, not the absolute score, that wins meets. Accordingly, good judging implies consistent scoring across the dives. Specifically, if a judge consistently gives low scores for all divers, or consistently gives high scores for all divers, the judging will yield fair relative results and will cause divers to place in the correct order. However, absolute scores have significance to the individual divers. Besides the obvious instances of setting records, absolute scores are also used for rankings and qualifications for higher level meets. In synchronised diving events, there is a panel of seven, nine, or eleven judges; two or three to mark the execution of one diver, two or three to mark the execution of the other, and the remaining three or five to judge the synchronisation. The execution judges are positioned two on each side of the pool, and they score the diver which is nearer to them. The 2012 London Olympics saw the first use of eleven judges. The score is computed similarly to the scores from other diving events, but has been modified starting with the 2012 London Olympics for the use of the larger judging panels. Each group of judges will have the highest and lowest scores dropped, leaving the middle score for each diver's execution and the three middle scores for synchronization. The total is then weighted by and multiplied by the DD. 
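The panel arithmetic described above is easy to sketch in code. The following is an illustrative Python sketch for individual (non-synchronised) events, not an official implementation; the function name and the sample judge awards are invented for the example.

```python
def dive_score(judge_scores, dd):
    """Total for one dive from raw judge awards and its degree of difficulty.

    Five-judge panel: drop the highest and lowest award, sum the middle three.
    Seven-judge panel (post-2012 rules): drop the two highest and two lowest,
    again leaving three. The sum of the kept awards is multiplied by the DD.
    """
    ordered = sorted(judge_scores)
    if len(ordered) == 5:
        kept = ordered[1:-1]   # drop 1 high, 1 low
    elif len(ordered) == 7:
        kept = ordered[2:-2]   # drop 2 high, 2 low
    else:
        raise ValueError("expected a 5- or 7-judge panel")
    return sum(kept) * dd

# Seven judges scoring a hypothetical dive with DD 3.0:
print(dive_score([8.0, 7.5, 8.5, 7.0, 8.0, 9.0, 6.5], 3.0))  # 70.5
```

For the seven sample awards, the two highest (8.5, 9.0) and the two lowest (6.5, 7.0) are discarded, leaving 7.5 + 8.0 + 8.0 = 23.5, which multiplied by the DD of 3.0 gives 70.5.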
The result is that the emphasis is on the synchronization of the divers. The synchronisation scores are based on: The judges may also disqualify the diver for certain violations during the dive, including: Divers create a dive list in advance of the meet; to win, a diver must accumulate more points than the other competitors. Often, simple dives with low DDs will look good to spectators but will not win meets. The competitive diver will attempt the highest DD dives possible with which they can achieve consistent, high scores. If divers are scoring 8 or 9 on most dives, it may be a sign of their extreme skill, or it may be a sign that their dive list is not competitive, and they may lose the meet to a diver with higher DDs and lower scores. In competition, divers must submit their lists beforehand, and once past a deadline (usually when the event is announced or shortly before it begins) they cannot change their dives. If they fail to perform the dive announced, even if they physically cannot execute the dive announced or if they perform a more difficult dive, they will receive a score of zero. Under exceptional circumstances, a redive may be granted, but these are exceedingly rare (usually for very young divers just learning how to compete, or if some event outside the diver's control has caused them to be unable to perform, such as a loud noise). In the Olympics or other highly competitive meets, many divers will have nearly the same list of dives as their competitors. The importance for divers competing at this level is not so much the DD, but how they arrange their list. Once the more difficult rounds of dives begin, it is important to lead off with a confident dive to build momentum. They also tend to put a very confident dive in front of a very difficult dive to ensure that they will have a good mentality for the difficult dive. Most divers have pre-dive and post-dive rituals that help them either maintain or regain focus. 
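The trade-off between difficulty and consistency can be made concrete with hypothetical numbers. Assuming three kept judge awards per dive (as on five- or seven-judge panels), a steady diver on easy dives can still lose to a rougher diver on harder ones:

```python
KEPT_AWARDS = 3  # awards remaining after the high/low scores are discarded

def list_total(dives):
    """Total for a dive list given (typical award, DD) pairs; made-up numbers."""
    return sum(KEPT_AWARDS * award * dd for award, dd in dives)

safe_list = [(8.5, 2.0)] * 6  # consistent 8.5s on easy (DD 2.0) dives
hard_list = [(7.0, 3.0)] * 6  # rougher 7.0s on hard (DD 3.0) dives

print(list_total(safe_list))  # 306.0
print(list_total(hard_list))  # 378.0: lower awards, but the higher-DD list wins
```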
Coaches also play a role in this aspect of the sport. Many divers rely on their coaches to help keep their composure during the meet. In a large meet coaches are rarely allowed on the deck to talk to their athlete, so it is common to see coaches using hand gestures or body movements to communicate. Some American meets allow the position of the dive to be changed even after the dive has been announced immediately before execution, but this is an exception to the rules generally observed internationally. Generally, NCAA rules allow for dives to be changed while the diver is on the board, but the diver must request the change directly after the dive is announced. This applies especially in cases where the wrong dive is announced. If the diver pauses mid-hurdle to ask for a change of dive, a balk will be declared and the change of dive will not be permitted. Under FINA law, no dive may be changed after the deadline for the dive-sheet to be submitted (generally a period ranging from one hour to 24 hours, depending on the rulings made by the event organiser). It is the diver's responsibility to ensure that the dive-sheet is filled in correctly, and also to correct the referee or announcer before the dive if they describe it incorrectly. If a dive is performed as submitted but not as (incorrectly) announced, it is declared failed and scores zero according to a strict reading of the FINA law, but in practice a re-dive would usually be granted in these circumstances. The global governing body of diving is FINA, which also governs swimming, synchronised swimming, water polo and open water swimming. Almost invariably, at national level, diving shares a governing body with the other aquatic sports. This is frequently a source of political friction, as the committees are naturally dominated by swimming officials who do not necessarily share or understand the concerns of the diving community. 
Divers often feel, for example, that they do not get adequate support over issues like the provision of facilities. Other areas of concern are the selection of personnel for the specialised diving committees and for coaching and officiating at events, and the team selection for international competitions. There are sometimes attempts to separate the governing body as a means to resolve these frustrations, but they are rarely successful. For example, in the UK the Great Britain Diving Federation was formed in 1992 with the intention of taking over the governance of diving from the ASA (Amateur Swimming Association). Although it initially received widespread support from the diving community, the FINA requirement that international competitors had to be registered with their National Governing Body was a major factor in the abandonment of this ambition a few years later. Since FINA refused to rescind recognition of the ASA as the British governing body for all aquatic sports including diving, elite divers had to belong to ASA-affiliated clubs to be eligible for selection to international competition. In the United States, scholastic diving is almost always part of the school's swim team, while Olympic and club diving are organized as a separate sport. The NCAA will separate diving from swimming in special diving competitions after the swim season is completed. Despite the apparent risk, the statistical incidence of injury in supervised training and competition is extremely low. The majority of accidents that are classified as 'diving-related' are incidents caused by individuals jumping from structures such as bridges or piers into water of inadequate depth. Many accidents also occur when divers do not account for rocks and logs in the water. Because of this, many beaches and pools prohibit diving in shallow waters or when a lifeguard is not on duty. 
After an incident in Washington in 1993, most US and other pool builders became reluctant to equip a residential swimming pool with a diving springboard, so home diving pools are now much less common. In the incident, 14-year-old Shawn Meneely made a "suicide dive" (holding his hands at his sides, so that his head hit the bottom first) in a private swimming pool and became a tetraplegic. The lawyers for the family, Jan Eric Peterson and Fred Zeder, successfully sued the diving board manufacturer, the pool builder, and the National Spa and Pool Institute over the inappropriate depth of the pool. The NSPI had specified a minimum depth of 7 ft 6 in (2.29 m), which proved insufficient in this case. The pool into which Meneely dived was not constructed to the published standards, which had changed after the homeowner installed the diving board on the non-compliant pool, but the courts held that the pool "was close enough" to the standards to hold the NSPI liable. The multimillion-dollar lawsuit was eventually resolved in 2001 for US$6.6 million ($8 million after interest was added) in favor of the plaintiff. Financially strained by the case, the NSPI filed twice for Chapter 11 bankruptcy protection and was successfully reorganized into a new swimming pool industry association. In competitive diving, FINA takes regulatory steps to ensure that athletes are protected from the inherent dangers of the sport. For example, it imposes age-based restrictions on the heights of platforms on which divers may compete. Group D divers have only recently been allowed to compete on the tower. In the past, the age group could compete only on springboard, to discourage children from taking on the greater risks of tower diving. Group D tower was introduced to counteract the phenomenon of coaches pushing young divers to compete in higher age categories, thus putting them at even greater risk. 
However, some divers may safely compete in higher age categories in order to dive from higher platforms. Usually this occurs when advanced Group C divers wish to compete on the 10 m. Points on pool depths in connection with safety: There are six "groups" into which dives are classified: "Forward, Back, Inward, Reverse, Twist," and "Armstand". The latter applies only to Platform competitions, whereas the other five apply to both Springboard and Platform. During the flight of the dive, one of four positions is assumed: These positions are referred to by the letters A, B, C and D respectively. Additionally, some dives can be started in a flying position. The body is kept straight with the arms extended to the side, and the regular dive position is assumed about halfway through the dive. Difficulty is rated according to the Degree of Difficulty of the dives. Some divers may find pike easier in a flip than tuck, and most find straight the easiest in a front/back dive, although it is still rated the most difficult because of the risk of overrotation. An armstand dive may have a higher degree of difficulty outdoors compared to indoors, as wind can destabilize the equilibrium of the diver. In competition, the dives are referred to by a schematic system of three- or four-digit numbers, with a letter indicating the position appended to the end of the number. The first digit of the number indicates the dive group as defined above. For groups 1 to 4, the number consists of three digits and a letter of the alphabet. The third digit represents the number of half-somersaults. The second digit is either 0 or 1, with 0 representing a normal somersault, and 1 signifying a "flying" variation of the basic movement (i.e. the first half somersault is performed in the straight position, and then the pike or tuck shape is assumed). No flying dive has been competed at a high level competition for many years. For example: For Group 5, the dive number has 4 digits. 
The first digit indicates that it is a twisting dive. The second digit indicates the group (1–4) of the underlying movement; the third digit indicates the number of half-somersaults, and the fourth indicates the number of half-twists. For example: For Group 6 – Armstand – the dive number has either three or four digits: three digits for dives without twist and four for dives with twists. In non-twisting armstand dives, the second digit indicates the direction of rotation (0 = no rotation, 1 = forward, 2 = backward, 3 = reverse, 4 = inward) and the third digit indicates the number of half-somersaults. Inward-rotating armstand dives have never been performed, and are generally regarded as physically impossible. For example: For twisting armstand dives, the dive number again has 4 digits, but rather than beginning with the number 5, the number 6 remains as the first digit, indicating that the "twister" will be performed from an armstand. The second digit indicates the direction of rotation – as above, the third is the number of half-somersaults, and the fourth is the number of half-twists: e.g. 6243D – armstand back double-somersault with one and a half twists in the free position. All dives carry a DD (degree of difficulty, also known as the tariff), an indication of how complex the dive is. The score that the dive receives is multiplied by the DD to give the dive's final score. Before competing, a diver must decide on a "list", a set of optional and compulsory dives. The optionals come with a DD limit: the diver must select a set number of dives whose combined DD is no more than the limit set by the competition or organising body. Until the mid-1990s the tariff was decided by the FINA diving committee, and divers could only select from the range of dives in the published tariff table. 
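The numbering scheme above can be decoded mechanically. Below is an illustrative Python decoder covering groups 1–4, twisting dives, and non-twisting armstand dives; the function and dictionary names are invented for the example, and twisting armstand codes are left out for brevity.

```python
GROUPS = {"1": "forward", "2": "back", "3": "reverse",
          "4": "inward", "5": "twisting", "6": "armstand"}
POSITIONS = {"A": "straight", "B": "pike", "C": "tuck", "D": "free"}
DIRECTIONS = {"0": "no rotation", "1": "forward", "2": "backward",
              "3": "reverse", "4": "inward"}

def describe_dive(code):
    """Decode a dive number such as '107C' or '5152B' into plain words."""
    digits, position = code[:-1], POSITIONS[code[-1]]
    group = digits[0]
    if group in "1234":
        # three digits: group, normal/flying flag, half-somersaults
        flying = "flying " if digits[1] == "1" else ""
        return f"{flying}{GROUPS[group]} dive, {digits[2]} half-somersaults, {position}"
    if group == "5":
        # four digits: 5, underlying group, half-somersaults, half-twists
        return (f"twisting {GROUPS[digits[1]]} dive, {digits[2]} half-somersaults, "
                f"{digits[3]} half-twists, {position}")
    if group == "6" and len(digits) == 3:
        # three digits: 6, direction of rotation, half-somersaults
        return (f"armstand dive, {DIRECTIONS[digits[1]]}, "
                f"{digits[2]} half-somersaults, {position}")
    raise NotImplementedError("twisting armstand dives are not handled here")

print(describe_dive("107C"))   # forward dive, 7 half-somersaults, tuck
print(describe_dive("5152B"))  # twisting forward dive, 5 half-somersaults, 2 half-twists, pike
```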
Since then, the tariff is calculated by a formula based on various factors such as the number of twists and somersaults, the height, the group, etc., and divers are free to submit new combinations. This change was implemented because new dives were being invented too frequently for an annual meeting to accommodate the progress of the sport. At the moment of take-off, two critical aspects of the dive are determined, and cannot subsequently be altered during the execution. One is the trajectory of the dive, and the other is the magnitude of the angular momentum. The speed of rotation – and therefore the total amount of rotation – may be varied from moment to moment by changing the shape of the body, in accordance with the law of conservation of angular momentum. The center of mass of the diver follows a parabolic path in free-fall under the influence of gravity (ignoring the effects of air resistance, which are negligible at the speeds involved). Since the parabola is symmetrical, the travel away from the board as the diver passes it is twice the amount of the forward travel at the peak of the flight. Excessive forward distance to the entry point is penalized when scoring a dive, but obviously an adequate clearance from the diving board is essential on safety grounds. The greatest possible height that can be achieved is desirable for several reasons: The magnitude of angular momentum remains constant throughout the dive, but since the angular momentum L equals Iω (the moment of inertia times the rotational speed), and the moment of inertia is larger when the body has an increased radius, the speed of rotation may be increased by moving the body into a compact shape, and reduced by opening out into a straight position. Since the tucked shape is the most compact, it gives the most control over rotational speed, and dives in this position are easier to perform. Dives in the straight position are hardest, since there is almost no scope for altering the speed, so the angular momentum must be created at take-off with a very high degree of accuracy. 
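The tuck/straight contrast follows directly from conservation of angular momentum. A quick numeric sketch, using made-up moments of inertia for a single diver:

```python
# L = I * omega is constant in flight; tucking reduces I, so omega rises.
I_STRAIGHT = 15.0   # illustrative moment of inertia in layout, kg*m^2
I_TUCK = 4.5        # illustrative moment of inertia in tuck, kg*m^2

L = I_STRAIGHT * 4.0       # leave the board spinning at 4.0 rad/s in layout
omega_tuck = L / I_TUCK    # same L, smaller I: faster rotation in tuck
print(round(omega_tuck, 2))  # 13.33 rad/s, over three times faster
```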
(A small amount of control is available by moving the position of the arms and by a slight hollowing of the back.) The opening of the body for the entry does not stop the rotation, but merely slows it down. The vertical entry achieved by expert divers is largely an illusion created by starting the entry slightly short of vertical, so that the legs are vertical as they disappear beneath the surface. A small amount of additional tuning is available by 'entry save' techniques, whereby underwater movements of the upper body and arms against the viscosity of the water affect the position of the legs. Dives with multiple twists and somersaults are some of the most spectacular movements, as well as the most challenging to perform. The rules state that twisting 'must not be generated manifestly on take-off'. Consequently, divers must use some of the somersaulting angular momentum to generate twisting movements. The physics of twisting can be explained by looking at the components of the angular momentum vector. As the diver leaves the board, the total angular momentum vector is horizontal, pointing directly to the left for a forward dive for example. For twisting rotation to exist, it is necessary to tilt the body sideways after takeoff, so that there is now a small component of this horizontal angular momentum vector along the body's long axis. The tilt can be seen in the photo. The tilting is done by the arms, which are outstretched to the sides just before the twist. When one arm is moved up and the other is moved down (like turning a big steering wheel), the body reacts by tilting to the side, which then begins the twisting rotation. At the completion of the required number of twist rotations, the arm motion is reversed (the steering wheel is turned back), which removes the body's tilt and stops the twisting rotation. An alternative explanation is that the moving arms experience a precession torque which sets the body into twisting rotation. 
Moving the arms back produces opposite torque which stops the twisting rotation. The rules state that the body should be vertical, or nearly so, for entry. Strictly speaking, it is physically impossible to achieve a literally vertical position throughout the entry as there will inevitably still be some rotational momentum while the body is entering the water. Divers therefore attempt to create the illusion of being vertical, especially when performing rapidly rotating multiple somersault movements. For back entries, one technique is to allow the upper body to enter slightly short of vertical so that the continuing rotation leaves the final impression of the legs entering vertically. This is called "Pike save". Another is to use "knee save" movements of scooping the upper body underwater in the direction of rotation so as to counteract the rotation of the legs. The arms must be beside the body for feet-first dives, which are typically competed only on the 1m springboard and only at fairly low levels of 3m springboard, and extended forwards in line for "head-first" dives, which are much more common competitively. It used to be common for the hands to be interlocked with the fingers extended towards the water, but a different technique has become favoured during the last few decades. Now the usual practice is for one hand to grasp the other with palms down to strike the water with a flat surface. This creates a vacuum between the hands, arms and head which, with a vertical entry, will pull down and under any splash until deep enough to have minimal effect on the surface of the water (the so-called "rip entry"). Once a diver is completely under the water they may choose to roll or scoop in the same direction their dive was rotating to pull their legs into a more vertical position. 
Apart from aesthetic considerations, it is important from a safety point of view that divers reinforce the habit of rolling in the direction of rotation, especially for forward and inward entries. Back injuries such as hyperextension are caused by attempting to re-surface in the opposite direction. Diving from the higher levels increases the danger and likelihood of such injuries. In Canada, elite competitive diving is regulated by DPC (Diving Plongeon Canada), although the individual provinces also have organizational bodies. The main competitive season runs from February to July, although some competitions may be held in January or December, and many divers (particularly international level athletes) will train and compete year round. Most provincial level competitions consist of events for 6 age groups (Groups A, B, C, D, E, and Open) for both genders on each of the three board levels. These age groups roughly correspond to those standardized by FINA, with the addition of a youngest age group for divers 9 and younger, Group E, which does not compete nationally and does not have a tower event (although divers of this age may choose to compete in Group D). The age group Open is so called because divers of any age, including those over 18, may compete in these events, so long as their dives meet a minimum standard of difficulty. Although Canada is internationally a fairly strong country in diving, the vast majority of Canadian high schools and universities do not have diving teams, and many Canadian divers accept athletic scholarships from American colleges. Adult divers who are not competitive at an elite level may compete in masters diving. Typically, masters are either adults who never practiced the sport as children or teenagers, or former elite athletes who have retired but still seek a way to be involved in the sport. 
Many diving clubs have masters teams in addition to their primary competitive ones, and while some masters dive only for fun and fitness, there are also masters competitions, which range from the local to world championship level. Divers can qualify to compete at the age group national championships, or junior national championships, in their age groups as assigned by FINA up to the age of 18. This competition is held annually in July. Qualification is based on achieving minimum scores at earlier competitions in the season, although athletes who place very highly at a national championship will be automatically qualified to compete at the next. Divers must qualify at two different competitions, at least one of which must be a level 1 competition, i.e. a competition with fairly strict judging patterns. Such competitions include the Polar Bear Invitational in Winnipeg, the Sting in Victoria, and the Alberta Provincial Championships in Edmonton or Calgary. The qualifying scores are determined by DPC according to the results of the preceding year's national competition, and typically do not have much variation from year to year. Divers older than 18, or advanced divers of younger ages, can qualify for the senior national championships, which are held twice each year, once roughly in March and once in June or July. Once again, qualification is based on achieving minimum scores at earlier competitions (in this case, within the 12 months preceding the national championships, and in an Open age group event), or high placements in previous national championships or international competitions. It is no longer the case that divers may use results from age group events to qualify for senior nationals, or results from Open events to qualify for age group nationals. In the Republic of Ireland facilities are limited to one pool at the National Aquatic Centre in Dublin. National championships take place late in the year, usually during November. 
The competition is held at the National Aquatic Centre in Dublin and consists of four events: In the United Kingdom, diving competitions on all boards run throughout the year. National Masters' Championships are held two or three times per year. In the United States, summer diving is usually limited to one meter diving at community or country club pools. Some pools organize intra-pool competitions, which are usually designed to accommodate all school-age children. In the United States, scholastic diving at the high school level is usually limited to one meter diving (but some schools use three meter springboards). Scores from those one meter dives contribute to the swim team's overall score. High school diving and swimming conclude their season with a state competition. Depending on the state and the number of athletes competing in the state, certain qualifications must be achieved to compete in the state's championship meet. There are often regional and district championships which must be competed in before reaching the state meet, to narrow the field to only the most competitive athletes. Most state championship meets consist of eleven dives, usually split between two categories: five required (voluntary) dives and six optional dives. In the United States, pre-college divers interested in learning one and three meter or platform diving should consider a club sanctioned by either USA Diving or AAU Diving. In USA Diving, Future Champions is the entry level or novice diver category with 8 levels of competition. From Future Champions, divers graduate to "Junior Olympic", or JO. JO divers compete in age groups at inter-club competitions, at invitationals, and if qualified, at regional, zone and national competitions. Divers over 19 years of age cannot compete in these events as a JO diver. USA Diving sanctions the Winter Nationals championship with one meter, three meter, and platform events. 
In the summer USA Diving sanctions the Summer Nationals, including all three events with both Junior and Senior divers. USA Diving is sanctioned by the United States Olympic Committee to select team representatives for international diving competitions including the World Championships and Olympic Games. AAU Diving sanctions one national event per year in the summer, with competition on one meter, three meter, and tower to determine the All-American team. In the United States, scholastic diving at the college level requires one and three meter diving. Scores from the one and three meter competition contribute to the swim team's overall meet score. College divers interested in tower diving may compete in the NCAA separate from swim team events. NCAA Divisions II and III do not usually compete platform; if a diver wishes to compete platform in college, he or she must attend a Division I school. Each division also has rules on the number of dives in each competition. Division II schools compete with 10 dives in competition whereas Division III schools compete with 11. Division I schools compete with only 6 dives, consisting of either 5 optionals and 1 voluntary, or 6 optionals. If the meet is a 5 optional meet, the divers perform 1 optional from each category (Front, Back, Inward, Reverse, and Twister) and then 1 voluntary from the category of their choice. The voluntary in this type of meet is always worth a DD (Degree of Difficulty) of 2.0, even if the real DD on a DD sheet is more or less. In a 6 optional meet, the divers again perform one dive from each category, but this time they perform a 6th optional from the category of their choosing, which is worth its actual DD from the DD sheet. The highest level of collegiate competition is the NCAA Division 1 Swimming and Diving Championship. 
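The Division I five-optional format can be sketched numerically. The awards and DDs below are invented; the only rule encoded is the one stated above, that the voluntary dive always counts at DD 2.0 regardless of its listed tariff.

```python
def ncaa_d1_total(optionals, voluntary_awards):
    """Score a hypothetical Division I 5-optional, 1-voluntary meet.

    optionals: (kept award sum, DD) for each of the five optional dives.
    voluntary_awards: kept award sum for the voluntary dive, which always
    counts at DD 2.0 even if its real tariff is higher or lower.
    """
    total = sum(awards * dd for awards, dd in optionals)
    return total + voluntary_awards * 2.0

# Five optionals, one per category, plus a voluntary (real DD ignored):
score = ncaa_d1_total([(22.0, 2.6), (21.0, 2.8), (23.5, 2.3),
                       (20.0, 3.0), (22.5, 2.5)], 24.0)
print(round(score, 1))  # 334.3
```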
Events at the championship include 1 meter springboard, 3 meter springboard, and platform, as well as various swimming individual and relay events. The points scored by swimmers and divers are combined to determine a team swimming & diving champion. To qualify for a diving event at the NCAA championships, a competitor must first finish in the top three at one of five zone championships, which are held after the various conference championship meets. A diver who scores at least 310 points on the 3 meter springboard and 300 points on the 1 meter springboard in a 6 optional meet can participate in the particular zone championship corresponding to the geographic region in which his or her school lies. A number of colleges and universities offer scholarships to men and women who have competitive diving skills. These scholarships are usually offered to divers with age-group or club diving experience. The NCAA limits the number of years a college student can represent any school in competitions. The limit is four years, but could be less under certain circumstances. Divers who continue diving past their college years can compete in Masters' Diving programs. Masters' diving programs are frequently offered by college or club programs. Masters' Diving events are normally conducted in age-groups separated by five or ten years, and attract competitors of a wide range of ages and experience (many, indeed, are newcomers to the sport); the oldest competitor in a Masters' Diving Championship was Viola Krahn, who at the age of 101 was the first person in any sport, male or female, anywhere in the world, to compete in an age-group of 100+ years in a nationally organized competition. Diving is also popular as a non-competitive activity. Such diving usually emphasizes the airborne experience, and the height of the dive, but does not emphasize what goes on once the diver enters the water. 
The ability to dive underwater can be a useful emergency skill and is an important part of watersport and navy safety training. Entering water from a height is an enjoyable leisure activity, as is underwater swimming. Such non-competitive diving can occur indoors and outdoors. Outdoor diving typically takes place from cliffs or other rock formations into either fresh or salt water; however, man-made diving platforms are sometimes constructed in popular swimming destinations. Outdoor diving requires knowledge of the water depth and currents, as conditions can be dangerous. On occasion, a diver will inadvertently belly flop, entering the water horizontally or nearly so and displacing a larger-than-usual amount of water. A recently developing branch of the sport is "High Diving" (see, e.g., the 2013 World Aquatics Championships), conducted in open-air locations, usually from improvised platforms far higher than those used in Olympic and World Championship events. Entry into the water is invariably feet-first, to avoid the risk of injury that head-first entry from such a height would involve. The final half-somersault is almost always performed backwards, enabling the diver to spot the entry point and control their rotation.
https://en.wikipedia.org/wiki?curid=8402
Dative case The dative case (abbreviated DAT, or sometimes D when it is a core argument) is a grammatical case used in some languages to indicate the recipient or beneficiary of an action, as in "Maria Jacobo potum dedit", Latin for "Maria gave Jacob a drink". In this example, the dative marks what would be considered the indirect object of a verb in English. Sometimes the dative has functions unrelated to giving. In Scottish Gaelic and Irish, the term "dative case" is used in traditional grammars to refer to the prepositional case-marking of nouns following simple prepositions and the definite article. In Georgian, the dative case also marks the subject of the sentence with some verbs and some tenses; this is called the dative construction. The dative was common among early Indo-European languages and has survived to the present in the Balto-Slavic branch and the Germanic branch, among others. It also exists in similar forms in several non-Indo-European languages, such as the Uralic family of languages. In some languages, the dative case has assimilated the functions of other, now extinct cases: in Ancient Greek, the dative has the functions of the Proto-Indo-European locative and instrumental as well as those of the original dative. Under the influence of English, which uses the preposition "to" for (among other uses) both indirect objects ("give to") and directions of movement ("go to"), the term "dative" has sometimes been used to describe cases that in other languages would more appropriately be called lative. "Dative" comes from Latin "cāsus datīvus" ("case for giving"), a translation of Greek δοτικὴ πτῶσις, "dotikē ptôsis" ("inflection for giving"), from its use with the verb "didónai" "to give". Dionysius Thrax in his Art of Grammar also refers to it as "epistaltikḗ" "for sending (a letter)", from the verb "epistéllō" "send to", a word from the same root as epistle.
The Old English language, which continued in use until after the Norman Conquest of 1066, had a dative case; however, the English case system gradually fell into disuse during the Middle English period, when the accusative and dative of pronouns merged into a single oblique case that was also used with all prepositions. This conflation of case in Middle and Modern English has led most modern grammarians to discard the "accusative" and "dative" labels as obsolete in reference to English, often using the term "objective" for oblique. The dative case is rare in modern English usage, but it can be argued that it survives in a few set expressions. One example is the word "methinks", with the meaning "it seems to me". It survives in this fixed form from Old English (having undergone, however, phonetic changes with the rest of the language), in which it was constructed as "[it]" + "me" (the dative case of the personal pronoun) + "thinks" (i.e., "seems", < Old English þyncan, "to seem", a verb closely related to the verb þencan, "to think", but distinct from it in Old English; later it merged with "think" and lost this meaning). The modern objective case pronoun whom is derived from the dative case in Old English, specifically the Old English dative pronoun "hwām" (as opposed to the modern subjective "who", which descends from Old English "hwā") – though "whom" also absorbed the functions of the Old English accusative pronoun "hwone". It is also cognate to the word "wem" (the dative form of "wer") in German. The OED defines all classical uses of the word "whom" in situations where the indirect object "is not known" – in effect, indicating the anonymity of the indirect object. Likewise, some of the object forms of personal pronouns are remnants of Old English datives. For example, "him" goes back to the Old English dative "him" (accusative was "hine"), and "her" goes back to the dative "hire" (accusative was "hīe").
These pronouns are not datives in modern English; they are also used for functions previously indicated by the accusative. The indirect object of a verb is expressed between the verb and the direct object: "he gave me a book" or "he wrote me a poem." An indirect object can often be "re-worded" with a prepositional phrase using "to" or "for", but it is then no longer an indirect object. For example, "He gave a book to me" and "He wrote a poem for me" have the same meaning as the examples above, but are now adverbial prepositional phrases. It is, of course, not unusual for two different grammatical structures to describe the same situation; however, referring to these prepositional objects as "indirect objects" is a common error. In general, the dative (German: "Dativ") is used to mark the indirect object of a German sentence. For example: In English, the first sentence can be rendered as "I sent the book "to the man"" and as "I sent "the man" the book", where the indirect object is identified in English by standing in front of the direct object. The normal word order in German is to put the dative in front of the accusative (as in the example above). However, since the German dative is marked in form, it can also be put "after" the accusative: "Ich schickte das Buch dem Mann(e)." The (e) after "Mann" and "Kind" signifies a now largely archaic -e ending for certain nouns in the dative. It survives today almost exclusively in set phrases such as "zu Hause" (at home, "lit." to house), "im Zuge" (in the course of), and "am Tage" (during the day, "lit." at the day), as well as in occasional usage in formal prose, poetry, and song lyrics. Some masculine nouns (and one neuter noun, "Herz" [heart]), referred to as "weak nouns" or "n-nouns", take an -n or -en in the dative singular and plural.
Many are masculine nouns ending in -e in the nominative (such as "Name" [name], "Beamte" [officer], and "Junge" [boy]), although not all such nouns follow this rule. Many also, whether or not they fall into the former category, refer to people, animals, professions, or titles; exceptions include the aforementioned "Herz" and "Name", as well as "Buchstabe" (letter), "Friede" (peace), "Obelisk" (obelisk), "Planet" (planet), and others. Certain German prepositions require the dative: "aus" (from), "außer" (out of), "bei" (at, near), "entgegen" (against), "gegenüber" (opposite), "mit" (with), "nach" (after, to), "seit" (since), "von" (from), and "zu" (at, in, to). Some other prepositions ("an" [at], "auf" [on], "entlang" [along], "hinter" [behind], "in" [in, into], "neben" [beside, next to], "über" [over, across], "unter" [under, below], "vor" [in front of], and "zwischen" [among, between]) may be used with the dative (indicating current location) or the accusative (indicating direction toward something): "Das Buch liegt auf dem Tisch(e)" (dative: The book is lying on the table), but "Ich lege das Buch auf den Tisch" (accusative: I put the book onto the table). In addition, the four prepositions "[an]statt" (in place of), "trotz" (in spite of), "während" (during), and "wegen" (because of), which require the genitive in modern formal language, are most commonly used with the dative in colloquial German. For example, "because of the weather" is expressed as "wegen dem Wetter" instead of the formally correct "wegen des Wetters". Other prepositions requiring the genitive in formal language are combined with "von" ("of") in colloquial style, e.g. "außerhalb vom Garten" instead of "außerhalb des Gartens" ("outside the garden"). Note that the concept of an indirect object may be rendered by a prepositional phrase. In this case, the noun's or pronoun's case is determined by the preposition, not by its function in the sentence.
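The preposition classes just listed can be captured in a small lookup. This is an illustrative sketch covering only the prepositions named above (the function name is an invention for the example); the two-way prepositions select the accusative for direction and the dative for location:

```python
# Dative-only prepositions versus "two-way" prepositions, as listed above.
DATIVE_ONLY = {"aus", "außer", "bei", "entgegen", "gegenüber",
               "mit", "nach", "seit", "von", "zu"}
TWO_WAY = {"an", "auf", "entlang", "hinter", "in",
           "neben", "über", "unter", "vor", "zwischen"}

def required_case(preposition: str, direction: bool = False) -> str:
    if preposition in DATIVE_ONLY:
        return "dative"
    if preposition in TWO_WAY:
        return "accusative" if direction else "dative"
    raise ValueError(f"not covered in this sketch: {preposition}")

print(required_case("mit"))                  # dative
print(required_case("auf"))                  # dative ("auf dem Tisch")
print(required_case("auf", direction=True))  # accusative ("auf den Tisch")
```

The genitive prepositions used colloquially with the dative ("wegen" etc.) are deliberately left out, since their case depends on register rather than meaning.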
Consider the sentence "Ich schickte das Buch zum Verleger" ("I sent the book to the publisher"). Here, the subject, "Ich", is in the nominative case, the direct object, "das Buch", is in the accusative case, and "zum Verleger" is in the dative case, since "zu" always requires the dative ("zum" is a contraction of "zu" + "dem"). However, if "Freund" follows "an" (indicating direction), the accusative is required, not the dative. All of the articles change in the dative case. Some German verbs require the dative for their direct objects. Common examples are "antworten" (to answer), "danken" (to thank), "gefallen" (to please), "folgen" (to follow), "glauben" (to believe), "helfen" (to help), and "raten" (to advise). In each case, the direct object of the verb is rendered in the dative; for example, "Ich helfe dir" ("I help you"). These verbs cannot be used in normal passive constructions, because German allows these only for verbs with accusative objects. It is therefore ungrammatical to say: *"Ich werde geholfen." "I am helped." Instead a special construction called "impersonal passive" must be used: "Mir wird geholfen", literally: "To me is helped." A colloquial (non-standard) and rarely used way to form the passive voice for dative verbs is the following: "Ich kriege geholfen", or: "Ich bekomme geholfen", literally: "I get helped". The use of the verb "to get" here reminds us that the dative case has something to do with giving and receiving. In German, help is not something you "perform on" somebody, but rather something you "offer" them. The dative case is also used with reflexive ("sich") verbs when specifying what part of the self the verb is being done to: cf. the respective "accord" in French: "Les enfants se sont lavés" ("the children have washed themselves") vs. "Les enfants se sont lavé" [uninflected] "les mains" ("... their hands"). German can use two datives to make sentences like: "Sei mir meinem Sohn(e) gnädig!" "For my sake, have mercy on my son!" Literally: "Be for me to my son merciful."
The first dative "mir" ("for me") expresses the speaker's commiseration (much like the "dativus ethicus" in Latin, see below). The second dative "meinem Sohn(e)" ("to my son") names the actual object of the plea. Mercy is to be given "to" the son "for" or "on behalf of" his mother/father. Adjective endings also change in the dative case. There are three inflection possibilities depending on what precedes the adjective: adjectives most commonly use "weak inflection" when preceded by a definite article (the), "mixed inflection" after an indefinite article (a/an), and "strong inflection" when a quantity is indicated (many green apples). There are several uses for the dative case ("Dativus"). In addition to its main function as the "dativus", the dative case has other functions in Classical Greek. (The chart below uses the Latin names for the types of dative; the Greek name for the dative is δοτική πτῶσις, like its Latin equivalent, derived from the verb "to give"; in Ancient Greek, δίδωμι.) The articles in the Greek dative are τῷ (masculine and neuter singular), τῇ (feminine singular), τοῖς (masculine and neuter plural), and ταῖς (feminine plural). The dative case, strictly speaking, no longer exists in Modern Greek, except in fossilized expressions like δόξα τω Θεώ (from the ecclesiastical τῷ Θεῷ δόξα, "Glory to God") or εν τάξει (ἐν τάξει, lit. "in order", i.e. "all right" or "OK"). Otherwise, most of the functions of the dative have been subsumed in the accusative. In Russian, the dative case is used for indicating the indirect object of an action (that to which something is given, thrown, read, etc.). When a person is the goal of motion, the dative is used instead of the accusative to indicate motion toward, usually with the preposition "к" + destination in the dative case: "к врачу", meaning "to the doctor." The dative is also the case taken by certain prepositions when expressing certain ideas. For instance, when the preposition "по" is used to mean "along", its object is always in the dative case, as in "по бокам", meaning "along the sides."
Other Slavic languages apply the dative case (and the other cases) more or less the same way as does Russian; some languages may use the dative in other ways. The following examples are from Polish: Some other kinds of dative use as found in the Serbo-Croatian language are: "Dativus finalis" (Titaniku u pomoć "to Titanic's rescue"), "Dativus commodi/incommodi" (Operi svojoj majci suđe "Wash the dishes for your mother"), "Dativus possessivus" (Ovcama je dlaka gusta "Sheep's hair is thick"), "Dativus ethicus" (Šta mi radi Boni? "What is Boni doing? (I am especially interested in what it is)") and Dativus auctoris (Izgleda mi okej "It seems okay to me"). Unusual in other Indo-European branches but common among Slavic languages, endings of nouns and adjectives are different based on grammatical function. Other factors are gender and number. In some cases, the ending may not be obvious, even when those three factors (function, gender, number) are considered. For example, in Polish, "syn" ("son") and "ojciec" ("father") are both masculine singular nouns, yet appear as "syn" → "synowi" and "ojciec" → "ojcu" in the dative. Both Lithuanian and Latvian have a distinct dative case in the system of nominal declensions. Lithuanian nouns preserve Indo-European inflections in the dative case fairly well: (o-stems) vaikas -> sg. vaikui, pl. vaikams; (ā-stems) ranka -> sg. rankai, pl. rankoms; (i-stems) viltis -> sg. vilčiai, pl. viltims; (u-stems) sūnus -> sg. sūnui, pl. sūnums; (consonant stems) vanduo -> sg. vandeniui, pl. vandenims. Adjectives in the dative case receive pronominal endings (this might be the result of a more recent development): tas geras vaikas -> sg. tam geram vaikui, pl. tiems geriems vaikams. The dative case in Latvian underwent further simplifications – the original masculine endings of "both" nouns and adjectives have been replaced with pronominal inflections: tas vīrs -> sg. tam vīram, pl. vīriem. Also, the final "s" in all Dative forms has been dropped.
The only exception is personal pronouns in the plural: mums (to us), jums (to you). Note that in colloquial Lithuanian the final "s" in the dative is often omitted as well: tiem geriem vaikam. In both Latvian and Lithuanian, the main function of the dative case is to render the indirect object in a sentence: (lt) aš duodu vyrui knygą; (lv) es dodu [duodu] vīram grāmatu – "I am giving a book to the man". The dative case can also be used with gerundives to indicate an action preceding or simultaneous with the main action in a sentence: (lt) jam įėjus, visi atsistojo – "when he walked in, everybody stood up", lit. "to him having walked in, all stood up"; (lt) jai miegant, visi dirbo – "while she slept, everybody was working", lit. "to her sleeping, all were working". In modern standard Lithuanian, the dative case is not governed by prepositions, although in many dialects this occurs frequently: (dial.) iki (+D) šiai dienai, (stand.) iki (+G) šios dienos – "up until this day". In Latvian, the dative case is taken by several prepositions in the singular and by all prepositions in the plural (due to peculiar historical changes): sg. bez (+G) tevis "(without thee)" ~ pl. bez (+D) jums "(without you)"; sg. pa (+A) ceļu "(along the road)" ~ pl. pa (+D) ceļiem "(along the roads)". In modern Eastern Armenian, the dative is formed by adding any article to the genitive: "dog" = շուն; GEN > շան "(of the dog; dog's)" with no article; DAT > շանը or շանն "(to the dog)" with definite articles (-ն if preceding a vowel); DAT > մի շան "(to a dog)" with indefinite article; DAT > շանս "(to my dog)" with 1st person possessive article; DAT > շանդ "(to your dog)" with 2nd person possessive article. There is a general tendency to view -ին as the standard dative suffix, but only because that is its most productive (and therefore common) form. The suffix -ին as a dative marker is nothing but the standard, most common, genitive suffix -ի accompanied by the definite article -ն.
But the dative case encompasses indefinite objects as well, which are not marked by -ին: definite DAT > Ես գիրքը տվեցի տղային: "(I gave the book to the boy)"; indefinite DAT > Ես գիրքը տվեցի մի տղայի: "(I gave the book to a boy)". The main function of the dative marking in Armenian is to indicate the receiving end of an action, most commonly the indirect object, which in English is preceded by the preposition "to". With "giving" verbs like "give, donate, offer, deliver, sell, bring...", the dative marks the recipient. With communicative verbs like "tell, say, advise, explain, ask, answer...", the dative marks the listener. Other verbs whose indirect objects are marked by the dative case in Armenian are "show, reach, look, approach...". Eastern Armenian also uses the dative case to mark the time of an event, in the same way English uses the preposition "at", as in "Meet me at nine o'clock." In Sanskrit, the dative case is known as the "fourth case" (chaturthi-vibhakti) in the usual procedure in the declension of nouns. Its use is mainly for the indirect object. As in many other languages, the dative case is used in Hungarian to show the indirect object of a verb. For example, "Dánielnek adtam ezt a könyvet" (I gave this book to Dániel). It has two suffixes, "-nak" and "-nek"; the correct one is selected by vowel harmony. The personal dative pronouns follow the "-nek" version: "nekem", "neked", etc. This case is also used to express "for" in certain circumstances, such as "I bought a gift for Mother". In possessive constructions the nak/nek endings are also used, but this is not the dative form (rather, the attributive or possessive case). Finnish does not have a separate dative case. However, the allative case can fulfill essentially the same role as the dative, beyond its primary meaning of directional movement (that is, going somewhere or approaching someone). For example: "He lahjoittivat kaikki rahansa köyhille" ("They donated all their money to the poor.")
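The Hungarian suffix choice described above ("-nak" vs. "-nek" by vowel harmony) can be sketched mechanically. This is a simplification: real Hungarian harmony treats some front vowels (i, í, é) as neutral and has lexical exceptions, so the rule below merely scans for the last back or front vowel:

```python
# Simplified Hungarian vowel harmony for the dative suffix -nak/-nek.
# Neutral vowels and mixed-vowel exceptions are not modelled.
BACK = set("aáoóuú")
FRONT = set("eéiíöőüű")

def dative(noun: str) -> str:
    for ch in reversed(noun.lower()):  # last harmonizing vowel decides
        if ch in BACK:
            return noun + "nak"
        if ch in FRONT:
            return noun + "nek"
    return noun + "nek"  # fallback for vowelless input

print(dative("ház"))     # háznak
print(dative("Dániel"))  # Dánielnek
```

The second call reproduces the "Dánielnek" form cited in the text.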
In the Northeast Caucasian languages, such as Tsez, the dative also takes on the functions of the lative case in marking the direction of an action. Some linguists still regard them as two separate cases in those languages, although the suffixes are exactly the same for both; other linguists list them separately only for the purpose of separating syntactic cases from locative cases. An example with the ditransitive verb "show" (literally: "make see") is given below: The dative/lative is also used to indicate possession, as in the example below, because there is no such verb as "to have". As in the examples above, the dative/lative case usually occurs in combination with another suffix as a poss-lative case; this should not be regarded as a separate case, however, as many of the locative cases in Tsez are constructed analytically; hence, they are, in fact, a combination of two case suffixes. See Tsez language#Locative case suffixes for further details. Verbs of perception or emotion (like "see", "know", "love", "want") also require the logical subject to stand in the dative/lative case. Note that in this example the "pure" dative/lative without its POSS-suffix is used. The dative case ("yönelme durumu") in the Turkish language is formed by adding the "-e" or "-a" suffix to the end of the noun, in accordance with the noun's vowel harmony. The word that should be in the dative case can be found as the answer to the questions 'neye?' (to what?), 'kime?' (to whom?), and 'nereye?' (to where?). There are many different uses for the dative case. The dative is also used for objects, usually indirect objects, but sometimes objects that in English would be considered direct. The dative case tells "whither", that is, the place "to which"; thus it has roughly the meaning of the English prepositions "to" and "into", and also "in" when it can be replaced with "into".
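The Turkish rule just described (suffix "-e" after a front vowel, "-a" after a back vowel, judged by the noun's last vowel) can be sketched as follows. The buffer consonant "y" for vowel-final nouns is included, while consonant alternations (such as a final p softening to b) are ignored; the function name is illustrative:

```python
# Two-fold Turkish vowel harmony for the dative suffix -e/-a.
FRONT = set("eiöü")
BACK = set("aıou")
VOWELS = FRONT | BACK

def dative(noun: str) -> str:
    word = noun.lower()
    last_vowel = next(ch for ch in reversed(word) if ch in VOWELS)
    suffix = "e" if last_vowel in FRONT else "a"
    buffer = "y" if word[-1] in VOWELS else ""  # buffer consonant after a vowel
    return noun + buffer + suffix

print(dative("ev"))     # eve
print(dative("okul"))   # okula
print(dative("araba"))  # arabaya
```

The last call shows the buffer consonant: a vowel-final noun takes "-ya" rather than a bare "-a".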
https://en.wikipedia.org/wiki?curid=8406
Dodecahedron In geometry, a dodecahedron (from Greek "dōdeka" "twelve" + "hédra" "base", "seat" or "face") is any polyhedron with twelve flat faces. The most familiar dodecahedron is the regular dodecahedron, which is a Platonic solid. There are also three regular star dodecahedra, which are constructed as stellations of the convex form. All of these have icosahedral symmetry, order 120. The pyritohedron, a common crystal form in pyrite, is an irregular pentagonal dodecahedron, having the same topology (in terms of its vertices as a graph) as the regular one but pyritohedral symmetry, while the tetartoid has tetrahedral symmetry. The rhombic dodecahedron, seen as a limiting case of the pyritohedron, has octahedral symmetry. The elongated dodecahedron and trapezo-rhombic dodecahedron variations, along with the rhombic dodecahedron, are space-filling. There are numerous other dodecahedra. The convex regular dodecahedron is one of the five regular Platonic solids and can be represented by its Schläfli symbol {5, 3}. Its dual polyhedron is the regular icosahedron {3, 5}, which has five equilateral triangles around each vertex. The convex regular dodecahedron also has three stellations, all of which are regular star dodecahedra. They form three of the four Kepler–Poinsot polyhedra: the small stellated dodecahedron {5/2, 5}, the great dodecahedron {5, 5/2}, and the great stellated dodecahedron {5/2, 3}. The small stellated dodecahedron and great dodecahedron are dual to each other; the great stellated dodecahedron is dual to the great icosahedron {3, 5/2}. All of these regular star dodecahedra have regular pentagonal or pentagrammic faces. The convex regular dodecahedron and great stellated dodecahedron are different realisations of the same abstract regular polyhedron; the small stellated dodecahedron and great dodecahedron are different realisations of another abstract regular polyhedron.
In crystallography, two important dodecahedra can occur as crystal forms in some symmetry classes of the cubic crystal system that are topologically equivalent to the regular dodecahedron but less symmetrical: the pyritohedron with pyritohedral symmetry, and the tetartoid with tetrahedral symmetry. A pyritohedron is a dodecahedron with pyritohedral (Th) symmetry. Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices (see figure). However, the pentagons are not constrained to be regular, and the underlying atomic arrangement has no true fivefold symmetry axis. Its 30 edges are divided into two sets, containing 24 and 6 edges of the same length. The only axes of rotational symmetry are three mutually perpendicular twofold axes and four threefold axes. Although regular dodecahedra do not exist in crystals, the pyritohedron form occurs in the crystals of the mineral pyrite, and it may have been an inspiration for the discovery of the regular Platonic solid form. The true regular dodecahedron can occur as a shape for quasicrystals (such as holmium–magnesium–zinc quasicrystal) with icosahedral symmetry, which includes true fivefold rotation axes. Its name comes from one of the two common crystal habits shown by pyrite, the other one being the cube. In pyritohedral pyrite, the faces have a Miller index of (210), which means that the dihedral angle is 2·arctan(2) ≈ 126.87° and each pentagonal face has one angle of approximately 121.6° in between two angles of approximately 106.6° and opposite two angles of approximately 102.6°. In a perfect crystal the faces would have these ideal measurements, but such ideal proportions are rarely found in nature. If the eight vertices of a cube have coordinates (±1, ±1, ±1), then a pyritohedron has 12 additional vertices of the form (0, ±(1 + "h"), ±(1 − "h"²)), together with the cyclic permutations of these coordinates, where "h" is the height of the wedge-shaped "roof" above the faces of the cube.
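This construction can be checked numerically. The sketch below assumes the standard coordinates (cube vertices (±1, ±1, ±1) plus twelve roof vertices (0, ±(1+h), ±(1−h²)) and their cyclic permutations); at h equal to the reciprocal of the golden ratio, all 20 vertices are equidistant from the origin, as required for a regular dodecahedron:

```python
from itertools import product
from math import isclose, sqrt

def pyritohedron_vertices(h):
    """8 cube vertices (±1, ±1, ±1) plus 12 'roof' vertices
    (0, ±(1+h), ±(1-h²)) and their cyclic permutations."""
    cube = list(product((-1, 1), repeat=3))
    extra = []
    for sy, sz in product((-1, 1), repeat=2):
        base = (0.0, sy * (1 + h), sz * (1 - h * h))
        for shift in range(3):  # cyclic rotations of the coordinates
            extra.append(tuple(base[(i - shift) % 3] for i in range(3)))
    return cube + extra

phi = (1 + sqrt(5)) / 2
verts = pyritohedron_vertices(1 / phi)
radii = [sqrt(x * x + y * y + z * z) for x, y, z in verts]
print(all(isclose(r, sqrt(3)) for r in radii))  # True: a regular dodecahedron
```

At h = 1 the roof vertices collapse in pairs, leaving the 14 distinct vertices of the rhombic dodecahedron, matching the limiting case described in the text.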
When "h" = 1, the six cross-edges degenerate to points and the pyritohedron reduces to a rhombic dodecahedron. When "h" = 0, the cross-edges are absorbed in the facets of the cube, and the pyritohedron reduces to a cube. When "h" = 1/φ ≈ 0.618, the multiplicative inverse of the golden ratio, the result is a regular dodecahedron. When "h" = −φ ≈ −1.618, the conjugate of this value, the result is a regular great stellated dodecahedron. For natural pyrite, "h" = 1/2, corresponding to the (210) Miller index of the faces. A reflected pyritohedron is made by swapping the nonzero coordinates above. The two pyritohedra can be superimposed to give the compound of two dodecahedra. The image to the left shows the case where the pyritohedra are convex regular dodecahedra. The pyritohedron has a geometric degree of freedom, with limiting cases of a cubic convex hull at one limit of collinear edges, and a rhombic dodecahedron as the other limit, where 6 edges degenerate to length zero. The regular dodecahedron represents a special intermediate case where all edges and angles are equal. It is possible to go past these limiting cases, creating concave or nonconvex pyritohedra. The "endo-dodecahedron" is concave and equilateral; it can tessellate space with the convex regular dodecahedron. Continuing in that direction, we pass through a degenerate case where twelve vertices coincide in the centre, and on to the regular great stellated dodecahedron, where all edges and angles are equal again and the faces have been distorted into regular pentagrams. On the other side, past the rhombic dodecahedron, we get a nonconvex equilateral dodecahedron with fish-shaped self-intersecting equilateral pentagonal faces. A tetartoid (also tetragonal pentagonal dodecahedron, pentagon-tritetrahedron, and tetrahedric pentagon dodecahedron) is a dodecahedron with chiral tetrahedral symmetry (T). Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices.
However, the pentagons are not regular and the figure has no fivefold symmetry axes. Although regular dodecahedra do not exist in crystals, the tetartoid form does. The name tetartoid comes from the Greek root for one-fourth, because the form has one fourth of full octahedral symmetry and half of pyritohedral symmetry. The mineral cobaltite can have this symmetry form. The tetartoid's topology can be derived from a cube whose square faces are bisected into two rectangles, as in the pyritohedron, with the bisection lines then slanted so as to retain threefold rotation at the eight corners. The vertices of a tetartoid pentagon can be given under tetrahedral symmetry, subject to certain conditions. It can be seen as a tetrahedron with its edges divided into 3 segments, along with a center point on each triangular face. In Conway polyhedron notation it can be seen as gT, a gyro tetrahedron. A lower-symmetry form of the regular dodecahedron can be constructed as the dual of a polyhedron built from two triangular anticupolae connected base-to-base, called a "triangular gyrobianticupola". It has D3d symmetry, order 12. It has 2 sets of 3 identical pentagons on the top and bottom, connected by 6 pentagons around the sides which alternate upwards and downwards. This form has a hexagonal cross-section, and identical copies can be connected as a partial hexagonal honeycomb, but not all vertices will match. The "rhombic dodecahedron" is a zonohedron with twelve rhombic faces and octahedral symmetry. It is dual to the quasiregular cuboctahedron (an Archimedean solid) and occurs in nature as a crystal form. The rhombic dodecahedron packs together to fill space. It can be seen as a degenerate pyritohedron in which the 6 special edges have been reduced to zero length, reducing the pentagons to rhombic faces. The rhombic dodecahedron has several stellations, the first of which is also a parallelohedral spacefiller.
Another important rhombic dodecahedron, the Bilinski dodecahedron, has twelve faces congruent to those of the rhombic triacontahedron, i.e. with face diagonals in the golden ratio. It is also a zonohedron and was described by Bilinski in 1960. This figure is another spacefiller, and it can also occur in non-periodic spacefillings along with the rhombic triacontahedron, the rhombic icosahedron and rhombic hexahedra. There are 6,384,634 topologically distinct "convex" dodecahedra, excluding mirror images; the number of vertices ranges from 8 to 20. (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.) Armand Spitz used a dodecahedron as the "globe" equivalent for his Digital Dome planetarium projector, based upon a suggestion from Albert Einstein.
https://en.wikipedia.org/wiki?curid=8407
Darwin, Northern Territory Darwin is the capital city of the Northern Territory of Australia, situated on the Timor Sea. It is the largest city in the sparsely populated Northern Territory, with a population of 148,564. It is the smallest, wettest, and most northerly of the Australian capital cities and acts as the Top End's regional centre. Darwin's proximity to Southeast Asia makes it a link between Australia and countries such as Indonesia and East Timor. The Stuart Highway begins in Darwin and extends south across central Australia through Tennant Creek and Alice Springs, concluding in Port Augusta, South Australia. The city is built upon a low bluff overlooking the harbour. Its suburbs begin at Lee Point in the north and stretch to Berrimah in the east; past Berrimah, the Stuart Highway continues on to Darwin's satellite city, Palmerston, and its suburbs. The Darwin region, like much of the Top End, experiences a tropical climate with a wet and a dry season. A period known locally as "the build up", leading up to Darwin's wet season, sees temperature and humidity increase. Darwin's wet season typically arrives in late November to early December and brings with it heavy monsoonal downpours, spectacular lightning displays, and increased cyclone activity. During the dry season, the city has clear skies and mild sea breezes from the harbour. The greater Darwin area is the ancestral home of the Larrakia people. On 9 September 1839, HMS "Beagle" sailed into Darwin harbour during its survey of the area. John Clements Wickham named the region "Port Darwin" in honour of their former shipmate Charles Darwin, who had sailed with them on the ship's previous voyage, which ended in October 1836. The settlement there became the town of Palmerston in 1869, but it was renamed Darwin in 1911. The city has been almost entirely rebuilt four times, following devastation caused by the 1897 cyclone, the 1937 cyclone, Japanese air raids during World War II, and Cyclone Tracy in 1974.
The Aboriginal people of the Larrakia language group are the traditional custodians and the first inhabitants of the greater Darwin area. They had trading routes with Southeast Asia (see Macassan contact with Australia) and imported goods from as far afield as South and Western Australia. Established songlines penetrated throughout the country, allowing stories and histories to be told and retold along the routes. The extent of shared songlines and history of multiple clan groups within this area is contestable. The Dutch visited Australia's northern coastline in the 1600s and landed on the Tiwi Islands, only to be repelled by the Tiwi people. The Dutch created the first European maps of the area, which accounts for the Dutch names in the region, such as Arnhem Land and Groote Eylandt. The first British person to see Darwin harbour appears to have been Lieutenant John Lort Stokes of HMS "Beagle" on 9 September 1839. The ship's captain, Commander John Clements Wickham, named the port after Charles Darwin, the British naturalist who had sailed with them both on the earlier second expedition of the "Beagle". In 1863, the Northern Territory was transferred from New South Wales to South Australia. In 1864 South Australia sent B. T. Finniss north as Government Resident to survey and found a capital for its new territory. Finniss chose a site at Escape Cliffs, near the entrance to the Adelaide River, northeast of the modern city. This attempt was short-lived, however, and the settlement was abandoned by 1865. On 5 February 1869, George Goyder, the Surveyor-General of South Australia, established a small settlement of 135 people at Port Darwin between Fort Hill and the escarpment. Goyder named the settlement Palmerston after the British Prime Minister Lord Palmerston. In 1870, the first poles for the Overland Telegraph were erected in Darwin, connecting Australia to the rest of the world.
The discovery of gold by employees of the Australian Overland Telegraph Line digging holes for telegraph poles at Pine Creek in 1871 spawned a gold rush, which further boosted the young colony's development. In February 1872 the brigantine "Alexandra" was the first private vessel to sail from an English port directly to Darwin, carrying passengers, many of whom were drawn by the recent gold finds. By early 1875 Darwin's white population had grown to approximately 300 because of the gold rush. On 17 February 1875 the "Gothenburg" left Darwin "en route" for Adelaide. The approximately 88 passengers and 34 crew (surviving records vary) included government officials, circuit-court judges, Darwin residents taking their first furlough, and miners. While travelling south along the north Queensland coast, the "Gothenburg" encountered a cyclone-strength storm and was wrecked on a section of the Great Barrier Reef. Only 22 men survived, while between 98 and 112 people perished. Many of those who perished were Darwin residents, and news of the tragedy severely affected the small community, which reportedly took several years to recover. In the 1870s, relatively large numbers of Chinese settled, at least temporarily, in the Northern Territory; many were contracted to work the goldfields and later to build the Palmerston to Pine Creek railway. By 1888 there were 6,122 Chinese in the Northern Territory, mostly in or around Darwin. The early Chinese settlers were mainly from Guangdong Province in southern China. At the end of the nineteenth century, however, anti-Chinese feeling grew in response to the 1890s economic depression, and the White Australia policy prompted many Chinese to leave the territory. Some families nevertheless stayed, became Australian citizens, and established a commercial base in Darwin. The Northern Territory was initially settled and administered by South Australia, until its transfer to the Commonwealth in 1911. 
In the same year, the city's official name changed from Palmerston to Darwin. The period between 1911 and 1919 was filled with political turmoil, particularly with trade union unrest, which culminated on 17 December 1918. Led by Harold Nelson, some 1000 demonstrators marched to Government House at Liberty Square in Darwin where they burnt an effigy of the Administrator of the Northern Territory John Gilruth and demanded his resignation. The incident became known as the Darwin Rebellion. Their grievances were against the two main Northern Territory employers: Vestey's Meatworks and the federal government. Both Gilruth and the Vestey company left Darwin soon afterwards. Around 10,000 Australian and other Allied troops arrived in Darwin at the outset of World War II, to defend Australia's northern coastline. On 19 February 1942 at 0957, 188 Japanese warplanes attacked Darwin in two waves. It was the same fleet that had bombed Pearl Harbor, though a considerably larger number of bombs were dropped on Darwin than on Pearl Harbor. The attack killed at least 243 people and caused immense damage to the town, airfields, and aircraft. These were by far the most serious attacks on Australia in time of war, in terms of fatalities and damage. They were the first of many raids on Darwin. Darwin was further developed after the war, with sealed roads constructed connecting the region to Alice Springs to the south and Mount Isa to the south-east, and Manton Dam built in the south to provide the city with water. On Australia Day (26 January) 1959, Darwin was granted city status. On 25 December 1974, Darwin was struck by Cyclone Tracy, which killed 71 people and destroyed over 70% of the city's buildings, including many old stone buildings such as the Palmerston Town Hall, which could not withstand the lateral forces generated by the strong winds. 
After the disaster, 30,000 of the city's population of 46,000 were evacuated, in what turned out to be the biggest airlift in Australia's history. The town was subsequently rebuilt with newer materials and techniques during the late 1970s by the Darwin Reconstruction Commission, led by former Brisbane Lord Mayor Clem Jones. The satellite city of Palmerston was built east of Darwin in the early 1980s. On 17 September 2003 the Adelaide–Darwin railway was completed, with the opening of the Alice Springs–Darwin standard-gauge line. Darwin has played host to many of aviation's early pioneers. On 10 December 1919 Captain Ross Smith and his crew landed in Darwin and won a £10,000 prize from the Australian Government for completing the first flight from London to Australia in under thirty days. Smith and his crew flew a Vickers Vimy, G-EAOU, and landed on an airstrip that has since become Ross Smith Avenue. Other aviation pioneers include Amy Johnson, Amelia Earhart, Sir Charles Kingsford Smith and Bert Hinkler. The original QANTAS Empire Airways Ltd hangar, a registered heritage site, was part of the original Darwin Civil Aerodrome in Parap; it is now a museum and still bears scars from the bombing of Darwin during World War II. Darwin was home to Australian and US pilots during the war, with airstrips built in and around the town, and today it provides a staging ground for military exercises. Darwin was a compulsory stopover and checkpoint in the London-to-Melbourne Centenary Air Race in 1934, officially named the MacRobertson Air Race, which was won by Tom Campbell Black and C. W. A. Scott. The Australian Aviation Heritage Centre, on the Stuart Highway outside the city centre, is one of only two places outside the United States where a B-52 bomber (on permanent loan from the United States Air Force) is on public display.
https://en.wikipedia.org/wiki?curid=8408
Dictator A dictator is a political leader who possesses absolute power. A dictatorship is a state ruled by one dictator or by a small clique. The word originated as the title of a magistrate in the Roman Republic appointed by the Senate to rule the republic in times of emergency (see Roman dictator and "justitium"). Like the term "tyrant" (which was originally a non-pejorative Ancient Greek title), and to a lesser degree "autocrat", "dictator" came to be used almost exclusively as a non-titular term for oppressive rule. Thus, in modern usage, the term "dictator" is generally used to describe a leader who holds or abuses an extraordinary amount of personal power. Dictatorships are often characterised by some of the following: suspension of elections and civil liberties; proclamation of a state of emergency; rule by decree; repression of political opponents; disregard for rule-of-law procedures; and a cult of personality. Dictatorships are often one-party or dominant-party states. A wide variety of leaders coming to power in different kinds of regimes, such as military juntas, one-party states, dominant-party states, and civilian governments under personal rule, have been described as dictators. They may hold left- or right-wing views, or may be apolitical. Originally an emergency legal appointment in the Roman Republic and in Etruscan culture, the term "dictator" did not have the negative meaning it has now. A dictator was a magistrate given sole power for a limited duration. At the end of the term, power reverted to normal consular rule, whereupon the dictator was held accountable for his actions, though not all dictators accepted a return to power sharing. 
The term began to acquire its modern negative meaning with Cornelius Sulla's ascension to the dictatorship following Sulla's second civil war, making himself the first dictator in Rome in more than a century (during which the office was ostensibly abolished) and "de facto" eliminating both the time limit and the need for senatorial acclamation. He avoided a major constitutional crisis by resigning the office after about one year, dying a few years later. Julius Caesar followed Sulla's example in 49 BC, and in February 44 BC was proclaimed "Dictator perpetuo", "dictator in perpetuity", officially doing away with any limitations on his power, which he kept until his assassination the following month. Following Caesar's assassination, his heir Augustus was offered the title of dictator, but he declined it. Later successors also declined the title, and its usage soon diminished among Roman rulers. As late as the second half of the 19th century, the term "dictator" had occasional positive implications. For example, during the Hungarian Revolution of 1848, the national leader Lajos Kossuth was often referred to as dictator, without any negative connotations, by his supporters and detractors alike, although his official title was that of regent-president. When creating a provisional executive in Sicily during the Expedition of the Thousand in 1860, Giuseppe Garibaldi officially assumed the title of "Dictator" (see Dictatorship of Garibaldi). Shortly afterwards, during the 1863 January Uprising in Poland, "Dictator" was also the official title of four leaders, the first being Ludwik Mierosławski. After that time, however, the term assumed an invariably negative connotation. In popular usage, a "dictatorship" is often associated with brutality and oppression, and as a result it is often used as a term of abuse against political opponents. The term has also come to be associated with megalomania. 
Many dictators create a cult of personality around themselves, and many have also granted themselves increasingly grandiloquent titles and honours. For instance, Idi Amin Dada, who had been a British army lieutenant prior to Uganda's independence from Britain in October 1962, subsequently styled himself "His Excellency, President for Life, Field Marshal Al Hadji Doctor Idi Amin Dada, VC, DSO, MC, Conqueror of the British Empire in Africa in General and Uganda in Particular". In the film "The Great Dictator" (1940), Charlie Chaplin satirized not only Adolf Hitler but the institution of dictatorship itself. A benevolent dictatorship refers to a government in which an authoritarian leader exercises absolute political power over the state but is perceived to do so with regard for the benefit of the population as a whole, standing in contrast to the decidedly malevolent stereotype of a dictator. A benevolent dictator may allow for some economic liberalization or democratic decision-making, such as through public referenda or elected representatives with limited power, and often makes preparations for a transition to genuine democracy during or after their term. It might be seen as a republican form of enlightened despotism. The label has been applied to leaders such as Ioannis Metaxas of Greece (1936–41), Josip Broz Tito of Yugoslavia (1953–80), and Lee Kuan Yew of Singapore (1959–90). The association between a dictator and the military is a common one; many dictators take great pains to emphasize their connections with the military, and they often wear military uniforms. In some cases, the association is perfectly legitimate: Francisco Franco was a lieutenant general in the Spanish Army before he became Chief of State of Spain, and Manuel Noriega was officially commander of the Panamanian Defense Forces. In other cases, the association is mere pretense. Some dictators have been masters of crowd manipulation, such as Mussolini and Hitler. 
Others were more prosaic speakers, such as Stalin and Franco. Typically the dictator's people seize control of all media, censor or destroy the opposition, and deliver strong daily doses of propaganda, often built around a cult of personality. Mussolini and Hitler used similar, modest titles meaning "the Leader": Mussolini used "Il Duce" and Hitler was generally referred to as "der Führer". Franco used the similar title "El Caudillo" ("the Head"), and for Stalin his adopted name became synonymous with his role as absolute leader. For Mussolini, Hitler, and Franco, the use of modest, non-traditional titles displayed their absolute power all the more strongly, as they needed no traditional or historical legitimacy. Because of its negative and pejorative connotations, modern authoritarian leaders very rarely (if ever) use the term "dictator" in their formal titles; instead they most often simply hold the title of president. In the 19th and early 20th centuries, however, its official usage was more common, as in Russia during its Civil War. Over time, dictators have been known to use tactics that violate human rights. For example, under the Soviet dictator Joseph Stalin, government policy was enforced by extrajudicial killings, secret police and the notorious Gulag system of concentration camps. Most Gulag inmates were not political prisoners, although significant numbers of political prisoners could be found in the camps at any one time. Data collected from Soviet archives gives the death toll from Gulags at 1,053,829. Other human rights abuses by the Soviet state included human experimentation, the use of psychiatry as a political weapon and the denial of freedoms of religion, assembly, speech and association. Pol Pot became dictator of Cambodia in 1975. In all, an estimated 1.7 million people (out of a population of 7 million) died due to the policies of his four-year dictatorship. As a result, Pol Pot is sometimes described as "the Hitler of Cambodia" and "a genocidal tyrant". 
The International Criminal Court issued an arrest warrant for Sudan's military dictator Omar al-Bashir over alleged war crimes in Darfur. In social choice theory, the notion of a dictator is formally defined as a person who can achieve any feasible social outcome he/she wishes. The formal definition yields an interesting distinction between two different types of dictators. Note that these definitions disregard some alleged dictators who are not interested in the actual achieving of social goals, as much as in propaganda and controlling public opinion. Monarchs and military dictators are also excluded from these definitions, because their rule relies on the consent of other political powers (the nobility or the army).
https://en.wikipedia.org/wiki?curid=8409
Decibel The decibel (symbol: dB) is a relative unit of measurement corresponding to one tenth of a bel. It is used to express the ratio of one value of a power or field quantity to another, on a logarithmic scale, the logarithmic quantity being called the power level or field level, respectively. It can be used to express a change in value (e.g., +1 dB or −1 dB) or an absolute value. In the latter case, it expresses the ratio of a value to a fixed reference value; when used in this way, a suffix that indicates the reference value is often appended to the decibel symbol. For example, if the reference value is 1 volt, then the suffix is "V" (e.g., "20 dBV"), and if the reference value is one milliwatt, then the suffix is "m" (e.g., "20 dBm"). Two different scales are used when expressing a ratio in decibels, depending on the nature of the quantities: power and field (root-power). When expressing a power ratio, the number of decibels is ten times its logarithm to base 10. That is, a change in "power" by a factor of 10 corresponds to a 10 dB change in level. When expressing field (root-power) quantities, a change in "amplitude" by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two so that the related power and field levels change by the same number of decibels with linear loads. The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth (deci-) of one bel, named in honor of Alexander Graham Bell; however, the bel is seldom used. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels. 
In the International System of Quantities, the decibel is defined as a unit of measurement for quantities of type level or level difference, which are defined as the logarithm of the ratio of power- or field-type quantities. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally the "Mile of Standard Cable" (MSC). 1 MSC corresponded to the loss of power over a 1 mile (approximately 1.6 km) length of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and closely matched the smallest attenuation detectable to the average listener. The standard telephone cable implied was "a cable having uniformly distributed resistance of 88 Ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire). In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the "Transmission Unit" (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power. The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU the decibel, one tenth of a newly defined unit representing the base-10 logarithm of the power ratio. The larger unit was named the "bel", in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit. The naming and early definition of the decibel are described in the NBS Standards Yearbook of 1931. In 1954, J. W. 
Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name "logit" for "standard magnitudes which combine by multiplication", to contrast with the name "unit" for "standard magnitudes which combine by addition". In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with field quantities as well as power, and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. The term "field quantity" is deprecated by ISO 80000-1, which favors "root-power quantity". In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO. ISO 80000-3 describes definitions for quantities and units of space and time; the 2006 edition of the standard defines the following quantities. The decibel (dB) is one tenth of a bel: 1 dB = 0.1 B. The bel (B) is (1/2) ln(10) nepers: 1 B = (1/2) ln(10) Np ≈ 1.151 Np. The neper (Np) is the change in the level of a field quantity when the field quantity changes by a factor of "e", that is, 1 Np = ln("e") = 1, thereby relating all of the units as nondimensional natural logarithms of field-quantity ratios. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity. Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two field quantities of √10:1. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10), which is approximately 1.25893, and an amplitude (field quantity) ratio of 10^(1/20) (approximately 1.12202). 
The bel is rarely used either without a prefix or with SI unit prefixes other than "deci"; it is preferred, for example, to use "hundredths of a decibel" rather than "millibels". Thus, five one-thousandths of a bel would normally be written '0.05 dB', and not '5 mB'. The method of expressing a ratio as a level in decibels depends on whether the measured property is a "power quantity" or a "root-power quantity"; see "Power, root-power, and field quantities" for details. When referring to measurements of "power" quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by L_P, that ratio expressed in decibels, which is calculated using the formula: L_P = 10 log10(P/P0) dB. The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units, before calculating the ratio. If P = P0 in the above equation, then L_P = 0. If P is greater than P0 then L_P is positive; if P is less than P0 then L_P is negative. Rearranging the above equation gives the following formula for P in terms of P0 and L_P: P = P0 · 10^(L_P/10). When referring to measurements of field quantities, it is usual to consider the ratio of the squares of F (measured field) and F0 (reference field). This is because in most applications power is proportional to the square of field, and historically their definitions were formulated to give the same value for relative ratios in such typical cases. Thus, the following definition is used: L_F = 20 log10(F/F0) dB. The formula may be rearranged to give F = F0 · 10^(L_F/20). Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. 
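The power-quantity definition and its rearrangement can be sketched in a few lines of Python (an illustration added here, not part of the original article; the function names are invented for the example):

```python
import math

def power_level_db(p, p0):
    """Level of power p relative to reference power p0: 10 * log10(p / p0)."""
    return 10 * math.log10(p / p0)

def power_from_level(level_db, p0):
    """Rearranged form: recover p from its level in decibels and the reference p0."""
    return p0 * 10 ** (level_db / 10)

# Equal measured and reference powers give 0 dB; a tenfold power
# increase gives +10 dB, and the rearrangement inverts the mapping.
print(power_level_db(1.0, 1.0))     # 0.0
print(power_level_db(10.0, 1.0))    # 10.0
print(power_from_level(10.0, 1.0))  # 10.0
```

The two functions are exact inverses of each other for a fixed reference, mirroring the forward and rearranged formulas in the text.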
Taking voltage as an example, this leads to the equation for power gain level L_G: L_G = 20 log10(V_out/V_in) dB, where V_out is the root-mean-square (rms) output voltage and V_in is the rms input voltage. A similar formula holds for current. The term "root-power quantity" is introduced by ISO Standard 80000-1:2009 as a substitute for "field quantity". The term "field quantity" is deprecated by that standard. Although power and field quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make "changes" in the respective levels match under restricted conditions such as when the medium is linear and the "same" waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship (P/P0) = (F/F0)^2 holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes. For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities P0 and F0 need not be related), or equivalently, (P2/P1) = (V2/V1)^2 must hold to allow the power level difference to be equal to the field level difference from power P1 and voltage V1 to P2 and V2. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. 
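The factor-of-20 convention for root-power quantities can likewise be checked numerically (a hypothetical Python sketch; the function name is invented):

```python
import math

def voltage_gain_db(v_out, v_in):
    """Gain level of a root-power (field) quantity: note the factor of 20."""
    return 20 * math.log10(v_out / v_in)

# Doubling the voltage into a constant impedance quadruples the power,
# so the field form (20 log of the ratio) and the power form
# (10 log of the squared ratio) report the same level change.
field_form = voltage_gain_db(2.0, 1.0)   # about 6.0206 dB
power_form = 10 * math.log10(2.0 ** 2)   # same quantity, via P proportional to V squared
print(round(field_form, 4), round(power_form, 4))
```

This is exactly the restricted condition named in the text: the agreement holds only because power is taken to be proportional to the square of the field quantity.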
Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated field quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently. Since logarithm differences measured in these units often represent power ratios and field ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic field (amplitude) ratio. The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point. For example, a power ratio of 1000 corresponds to an amplitude ratio of √1000 ≈ 31.62, and both give L_G = 30 dB, illustrating the consequence from the definitions above that L_G has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared. A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 or 1/2 is approximately a change of 3 dB. More precisely, the change is ±3.0103 dB, but this is almost universally rounded to "3 dB" in technical writing. This implies an increase in voltage by a factor of √2 ≈ 1.414. Likewise, a doubling or halving of the voltage, and quadrupling or quartering of the power, is commonly described as "6 dB" rather than ±6.0206 dB. Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of 10^(3/10), or 1.9953, about 0.24% different from exactly 2, and a voltage ratio of 10^(3/20), or 1.4125, 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to a power ratio of 10^(6/10), about 3.9811, roughly 0.5% different from 4. The decibel is useful for representing large ratios and for simplifying representation of multiplied effects such as attenuation from multiple sources along a signal chain. 
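The rounded figures above ("3 dB" and "6 dB") can be reproduced directly; this short Python snippet (illustrative only) computes the exact ratios:

```python
import math

# "3 dB" and "6 dB" are rounded figures: compute the exact power ratios
# corresponding to levels of exactly 3.000 dB and 6.000 dB.
ratio_3db = 10 ** (3.000 / 10)
ratio_6db = 10 ** (6.000 / 10)
print(round(ratio_3db, 4))  # 1.9953 (about 0.24% below 2)
print(round(ratio_6db, 4))  # 3.9811 (about 0.5% below 4)

# The exact level of a true factor-of-2 power change:
print(round(10 * math.log10(2), 4))  # 3.0103
```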
Its application in systems with additive effects is less intuitive. The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See "Bode plot" and "Semi-log plot". For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing". Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, log(A × B × C) = log(A) + log(B) + log(C). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately a 2× power gain, and 10 dB is a 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret. According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." 
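The additive-gain property for cascaded stages can be sketched as follows (a minimal Python illustration; the stage gains are made-up figures):

```python
import math

def chain_gain_db(stage_gains_db):
    """Overall gain of cascaded stages: sum the per-stage decibel gains."""
    return sum(stage_gains_db)

# Hypothetical three-stage chain: 10 dB + 3 dB + 1 dB = 14 dB total.
stages_db = [10.0, 3.0, 1.0]
total_db = chain_gain_db(stages_db)
print(total_db)  # 14.0

# Cross-check by multiplying the linear power ratios instead:
linear_total = math.prod(10 ** (g / 10) for g in stages_db)
print(round(10 * math.log10(linear_total), 6))  # 14.0
```

Summing three numbers replaces multiplying three linear factors, which is the convenience Mitschke describes.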
However, for the same reason that addition comes more easily to humans than multiplication, decibels are awkward in inherently additive operations: "if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!"; "suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA."; "in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB." Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return; in this sense, operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual ones. The logarithmic mean of n values is obtained from their logarithmic sum by subtracting 10 log10(n), since logarithmic division is linear subtraction. Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis". The human perception of the intensity of sound and light approximates the logarithm of intensity rather than a linear relationship (Weber–Fechner law), making the dB scale a useful measure. The decibel is commonly used in acoustics as a unit of sound pressure level. 
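The "logarithmic addition" just described can be written out explicitly; this Python sketch (with invented function names) reproduces the 93 dB, 84.8 dBA, and 87 dB figures quoted above:

```python
import math

def db_sum(levels_db):
    """Logarithmic addition: convert to linear power, add, convert back."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

def db_subtract(combined_db, background_db):
    """Remove a background contribution from a combined level."""
    return 10 * math.log10(10 ** (combined_db / 10) - 10 ** (background_db / 10))

print(round(db_sum([90, 90]), 1))     # 93.0 (two equal sources add 3 dB)
print(round(db_subtract(87, 83), 1))  # 84.8 (machine noise alone)

# Logarithmic average of 70 dB and 90 dB: logarithmic sum minus 10*log10(n).
print(round(db_sum([70, 90]) - 10 * math.log10(2), 1))  # 87.0
```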
The reference pressure for sound in air is set at the typical threshold of perception of an average human, and there are common comparisons used to illustrate different levels of sound pressure. Sound pressure is a field quantity, therefore the field version of the unit definition is used: L_p = 20 log10(p_rms/p_ref) dB, where p_rms is the root mean square of the measured sound pressure and p_ref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water. Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value. The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is greater than or equal to 1 trillion (10^12). Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of 10^12 is 12, which is expressed as a sound pressure level of 120 dB re 20 μPa. Since the human ear is not equally sensitive to all sound frequencies, noise levels at maximum human sensitivity, somewhere between 2 and 4 kHz, are factored more heavily into some measurements using frequency weighting. (See also Stevens' power law.) In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget. The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. 
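The sound-pressure-level computation with the 20 μPa reference can be sketched as (illustrative Python, added here for clarity):

```python
import math

P_REF_AIR = 20e-6  # standard reference sound pressure in air: 20 micropascals

def spl_db(p_rms, p_ref=P_REF_AIR):
    """Sound pressure level: pressure is a field quantity, so use 20 log10."""
    return 20 * math.log10(p_rms / p_ref)

# A pressure one million times the reference gives 120 dB SPL, matching
# the 10^12 intensity ratio in the text (20 * 6 = 10 * 12).
print(round(spl_db(20.0), 1))   # 120.0
print(round(spl_db(20e-6), 1))  # 0.0 (the threshold of hearing itself)
```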
For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW). In professional audio specifications, a popular unit is the dBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, approximately 0.775 V RMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical. In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities. In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B. In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity. Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest. Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear. However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. 
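The dBm conversions and the optical-link arithmetic described above can be illustrated as follows (a Python sketch; the launch power and loss figures are hypothetical):

```python
import math

def dbm_to_mw(level_dbm):
    """0 dBm is defined as 1 mW; each 10 dB step is a factor of 10 in power."""
    return 10 ** (level_dbm / 10)

def mw_to_dbm(power_mw):
    return 10 * math.log10(power_mw)

print(round(dbm_to_mw(0), 3))  # 1.0
print(round(dbm_to_mw(1), 3))  # 1.259

# Link budget (hypothetical figures): received level is launch power
# minus the summed component losses, all in decibel units.
launch_dbm = 3.0
losses_db = [0.5, 0.25, 1.25]  # e.g. connector, splice, fiber run
received_dbm = launch_dbm - sum(losses_db)
print(received_dbm)  # 1.0
```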
Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value. Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range. Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt. In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative. The SI does not permit attaching qualifiers to units, whether as suffix or prefix, other than standard SI prefixes. Therefore, even though the decibel is accepted for use alongside SI units, the practice of attaching a suffix to the basic dB unit, forming compound units such as dBm, dBu, dBA, etc., is not. The proper way, according to the IEC 60027-3, is either as L_x (re x_ref) or as L_x/x_ref, where x is the quantity symbol and x_ref is the value of the reference quantity, e.g., L_E (re 1 μV/m) = L_E/(1 μV/m) for the electric field strength E relative to a 1 μV/m reference value. Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices.
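To make the absolute-unit idea concrete, the sketch below converts between milliwatts and dBm and applies the link-budget arithmetic mentioned earlier; the helper names and loss values are hypothetical:

```python
import math

def dbm_from_mw(p_mw):
    """Absolute power level in dBm (reference level: 1 mW)."""
    return 10 * math.log10(p_mw)

def mw_from_dbm(level_dbm):
    """Inverse conversion: dBm back to milliwatts."""
    return 10 ** (level_dbm / 10)

print(dbm_from_mw(1.0))            # 0.0   -> 0 dBm is exactly 1 mW
print(round(mw_from_dbm(1.0), 3))  # 1.259 -> +1 dBm is about 1.259 mW

# dBu reference: the RMS voltage that delivers 1 mW into 600 ohms.
v_ref = math.sqrt(0.001 * 600)
print(round(v_ref, 3))             # 0.775

# Optical link budget: launched power in dBm minus each component's loss in dB.
launch_dbm = 3.0
losses_db = [0.5, 0.5, 0.2, 1.8]   # hypothetical connector/splice/fiber losses
received_dbm = launch_dbm - sum(losses_db)
print(received_dbm)                # 0.0 dBm received, i.e. 1 mW
```

Because dBm is logarithmic, the whole budget reduces to addition and subtraction, which is the practical appeal of suffixed decibel units.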
Sometimes the suffix is a unit symbol (W, K, m), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dB-Hz", or with a space, as in "dB HL", or with no intervening character, as in "dBm", or enclosed in parentheses, as in "dB(sm)". Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above. Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing (20 μPa). The measures of pressure (a field quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10. See also dBV and dBu above. Attenuation constants, in fields such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. For example, dB/m represents decibels per meter and dB/mi decibels per mile. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km.
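The dimensional-analysis rule in the closing example can be checked directly; a trivial sketch, with a helper name of our own:

```python
def link_loss_db(atten_db_per_km, length_km):
    """Total loss from an attenuation constant: (dB/km) x km = dB."""
    return atten_db_per_km * length_km

# The example from the text: a 100 m (0.1 km) run of 3.5 dB/km fiber.
print(round(link_loss_db(3.5, 0.1), 2))  # 0.35
```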
https://en.wikipedia.org/wiki?curid=8410
Darwinism Darwinism is a theory of biological evolution developed by the English naturalist Charles Darwin (1809–1882) and others, stating that all species of organisms arise and develop through the natural selection of small, inherited variations that increase the individual's ability to compete, survive, and reproduce. Also called Darwinian theory, it originally included the broad concepts of transmutation of species or of evolution, which gained general scientific acceptance after Darwin published "On the Origin of Species" in 1859, including concepts which predated Darwin's theories. English biologist Thomas Henry Huxley coined the term "Darwinism" in April 1860. Darwinism subsequently referred to the specific concepts of natural selection, the Weismann barrier, or the central dogma of molecular biology. Though the term usually refers strictly to biological evolution, creationists have appropriated it to refer to the origin of life. It is therefore considered the belief and acceptance of Darwin's and of his predecessors' work, in place of other concepts, including divine design and extraterrestrial origins. The term was initially used to describe evolutionary concepts in general, including earlier concepts published by English philosopher Herbert Spencer. Many of the proponents of Darwinism at that time, including Huxley, had reservations about the significance of natural selection, and Darwin himself gave credence to what was later called Lamarckism. The strict neo-Darwinism of German evolutionary biologist August Weismann gained few supporters in the late 19th century. During the approximate period of the 1880s to about 1920, sometimes called "the eclipse of Darwinism", scientists proposed various alternative evolutionary mechanisms which eventually proved untenable.
The development of the modern synthesis in the early 20th century, incorporating natural selection with population genetics and Mendelian genetics, revived Darwinism in an updated form. While the term "Darwinism" has remained in use amongst the public when referring to modern evolutionary theory, it has increasingly been argued by science writers such as Olivia Judson and Eugenie Scott that it is an inappropriate term for modern evolutionary theory. For example, Darwin was unfamiliar with the work of the Moravian scientist and Augustinian friar Gregor Mendel, and as a result had only a vague and inaccurate understanding of heredity. He naturally had no inkling of later theoretical developments and, like Mendel himself, knew nothing of genetic drift, for example. In the United States, creationists often use the term "Darwinism" as a pejorative term in reference to beliefs such as scientific materialism, but in the United Kingdom the term has no negative connotations, being freely used as a shorthand for the body of theory dealing with evolution, and in particular, with evolution by natural selection. While the term "Darwinism" had been used previously to refer to the work of Erasmus Darwin in the late 18th century, the term as understood today was introduced when Charles Darwin's 1859 book "On the Origin of Species" was reviewed by Thomas Henry Huxley in the April 1860 issue of the "Westminster Review". 
Having hailed the book as "a veritable Whitworth gun in the armoury of liberalism" promoting scientific naturalism over theology, and praising the usefulness of Darwin's ideas while expressing professional reservations about Darwin's gradualism and doubting if it could be proved that natural selection could form new species, Huxley compared Darwin's achievement to that of Nicolaus Copernicus in explaining planetary motion: These are the basic tenets of evolution by natural selection as defined by Darwin: Another important evolutionary theorist of the same period was the Russian geographer and prominent anarchist Peter Kropotkin who, in his book "Mutual Aid: A Factor of Evolution" (1902), advocated a conception of Darwinism counter to that of Huxley. His conception was centred around what he saw as the widespread use of co-operation as a survival mechanism in human societies and animals. He used biological and sociological arguments in an attempt to show that the main factor in facilitating evolution is cooperation between individuals in free-associated societies and groups. This was in order to counteract the conception of fierce competition as the core of evolution, which provided a rationalization for the dominant political, economic and social theories of the time; and the prevalent interpretations of Darwinism, such as those by Huxley, whom Kropotkin targeted as an opponent. Kropotkin's conception of Darwinism could be summed up by the following quote: "Darwinism" soon came to stand for an entire range of evolutionary (and often revolutionary) philosophies about both biology and society. One of the more prominent approaches, summed up in the 1864 phrase "survival of the fittest" by Herbert Spencer, later became emblematic of Darwinism even though Spencer's own understanding of evolution (as expressed in 1857) was more similar to that of Jean-Baptiste Lamarck than to that of Darwin, and predated the publication of Darwin's theory in 1859.
What is now called "Social Darwinism" was, in its day, synonymous with "Darwinism"—the application of Darwinian principles of "struggle" to society, usually in support of anti-philanthropic political agendas. Another interpretation, one notably favoured by Darwin's half-cousin Francis Galton, was that "Darwinism" implied that because natural selection was apparently no longer working on "civilized" people, it was possible for "inferior" strains of people (who would normally be filtered out of the gene pool) to overwhelm the "superior" strains, and voluntary corrective measures would be desirable—the foundation of eugenics. In Darwin's day there was no rigid definition of the term "Darwinism", and it was used by opponents and proponents of Darwin's biological theory alike to mean whatever they wanted it to in a larger context. The ideas had international influence, and Ernst Haeckel developed what was known as "Darwinismus" in Germany, although, like Spencer's "evolution", Haeckel's "Darwinism" had only a rough resemblance to the theory of Charles Darwin, and was not centered on natural selection. In 1886, Alfred Russel Wallace went on a lecture tour across the United States, starting in New York and going via Boston, Washington, Kansas, Iowa and Nebraska to California, lecturing on what he called "Darwinism" without any problems. In his book "Darwinism" (1889), Wallace had used the term "pure-Darwinism", which proposed a "greater efficacy" for natural selection. George Romanes dubbed this view "Wallaceism", noting that in contrast to Darwin, this position was advocating a "pure theory of natural selection to the exclusion of any supplementary theory." Taking influence from Darwin, Romanes was a proponent of both natural selection and the inheritance of acquired characteristics. The latter was denied by Wallace, who was a strict selectionist.
Romanes' definition of Darwinism conformed directly with Darwin's views and was contrasted with Wallace's definition of the term. The term "Darwinism" is often used in the United States by promoters of creationism, notably by leading members of the intelligent design movement, as an epithet to attack evolution as though it were an ideology (an "ism") of philosophical naturalism, or atheism. For example, in 1993, UC Berkeley law professor and author Phillip E. Johnson made this accusation of atheism with reference to Charles Hodge's 1874 book "What Is Darwinism?". However, unlike Johnson, Hodge confined the term to exclude those like American botanist Asa Gray who combined Christian faith with support for Darwin's natural selection theory, before answering the question posed in the book's title by concluding: "It is Atheism." Darwinism is an attempt to explain "design without a designer", according to evolutionary biologist Francisco J. Ayala. Creationists use the term "Darwinism" pejoratively to imply that the theory has been held as true only by Darwin and a core group of his followers, whom they cast as dogmatic and inflexible in their belief. In the 2008 documentary film "Expelled: No Intelligence Allowed", which promotes intelligent design (ID), American writer and actor Ben Stein refers to scientists as Darwinists. Reviewing the film for "Scientific American", John Rennie says "The term is a curious throwback, because in modern biology almost no one relies solely on Darwin's original ideas... Yet the choice of terminology isn't random: Ben Stein wants you to stop thinking of evolution as an actual science supported by verifiable facts and logical arguments and to start thinking of it as a dogmatic, atheistic ideology akin to Marxism." However, "Darwinism" is also used neutrally within the scientific community to distinguish the modern evolutionary synthesis, which is sometimes called "neo-Darwinism", from the ideas first proposed by Darwin.
"Darwinism" also is used neutrally by historians to differentiate Darwin's theory from other evolutionary theories current around the same period. For example, "Darwinism" may refer to Darwin's proposed mechanism of natural selection, in comparison to more recent mechanisms such as genetic drift and gene flow. It may also refer specifically to the role of Charles Darwin as opposed to others in the history of evolutionary thought—particularly contrasting Darwin's results with those of earlier theories such as Lamarckism or later ones such as the modern evolutionary synthesis. In political discussions in the United States, the term is mostly used by its enemies. "It's a rhetorical device to make evolution seem like a kind of faith, like 'Maoism,'" says Harvard University biologist E. O. Wilson. He adds, "Scientists don't call it 'Darwinism'." In the United Kingdom the term often retains its positive sense as a reference to natural selection, and, for example, British atheist Richard Dawkins wrote in his collection of essays "A Devil's Chaplain", published in 2003, that as a scientist he is a Darwinist. In his 1995 book "Darwinian Fairytales", Australian philosopher David Stove used the term "Darwinism" in a different sense than the above examples. Describing himself as non-religious and as accepting the concept of natural selection as a well-established fact, Stove nonetheless attacked what he described as flawed concepts proposed by some "Ultra-Darwinists." Stove alleged that by using weak or false "ad hoc" reasoning, these Ultra-Darwinists used evolutionary concepts to offer explanations that were not valid: for example, Stove suggested that the sociobiological explanation of altruism as an evolutionary feature was presented in such a way that the argument was effectively immune to any criticism.
English philosopher Simon Blackburn wrote a rejoinder to Stove, though a subsequent essay by Stove's protégé James Franklin suggested that Blackburn's response actually "confirms Stove's central thesis that Darwinism can 'explain' anything." In evolutionary aesthetics theory, there is evidence that perceptions of beauty are determined by natural selection and are therefore Darwinian: things, aspects of people and landscapes considered beautiful are typically found in situations likely to enhance the survival of the perceiving human's genes.
https://en.wikipedia.org/wiki?curid=8411
Doraemon Doraemon ( ) is a Japanese manga series written and illustrated by Fujiko Fujio, the pen name of the duo Hiroshi Fujimoto and Motoo Abiko. The series has also been adapted into a successful anime series and media franchise. The story revolves around an earless robotic cat named Doraemon, who travels back in time from the 22nd century to aid a boy named Nobita Nobi. The first full story in the Doraemon manga series was published in January 1970. A pre-advertisement for the manga was published in six different magazines in December 1969. A total of 1,465 stories were created in the original series, which are published by Shogakukan. It is one of the best-selling manga in the world, and has sold over 100 million copies. The volumes are collected in the Takaoka Central Library in Toyama, Japan, where Fujiko Fujio was born. Turner Broadcasting System bought the rights to the Doraemon anime series in the mid-1980s for an English-language release in the United States, but cancelled it without explanation before broadcasting any episodes. In July 2013, Voyager Japan announced the manga would be released digitally in English via the Amazon Kindle e-book service. Awards for Doraemon include the Japan Cartoonists Association Award for excellence in 1973, the first Shogakukan Manga Award for children's manga in 1982, and the first Osamu Tezuka Culture Award in 1997. In March 2008, Japan's Foreign Ministry appointed Doraemon as the nation's first "anime ambassador." A Ministry spokesperson explained the novel decision as an attempt to help people in other countries understand Japanese anime better and to deepen their interest in Japanese culture. The Foreign Ministry action confirms that Doraemon has come to be considered a Japanese cultural icon. In India, its Hindi, Telugu and Tamil translations have been broadcast, and the anime version is the country's highest-rated kids' show, winning the "Best Show For Kids" award twice at the Nickelodeon Kids' Choice Awards India, in 2013 and 2015.
In 2002, "Time Asia" magazine acclaimed the character as an "Asian Hero" in a special feature survey. An edited English dub distributed by TV Asahi began airing on Disney XD in the United States on July 7, 2014. On August 17, 2015, another English dubbed version distributed by Luk Internacional began broadcasting on Boomerang UK. The film series is the largest by number of admissions in Japan. Doraemon, a cat robot from the 22nd century, is sent to help Nobita Nobi, a young boy, who scores poor grades and is frequently bullied by his two classmates, Gian and Suneo. So that his descendants can improve their lives, Doraemon is sent to take care of Nobita by Sewashi Nobi, Nobita's future grandson. Doraemon has a four-dimensional pouch in which he stores unexpected gadgets that improve his life. He has many gadgets, which he gets from The Future Departmental Store, such as Bamboo-Copter, a small piece of headgear that can allow its users to fly; Anywhere Door, a pink-colored door that allows people to travel according to the thoughts of the person who turns the knob; Time Kerchief, a handkerchief that can turn an object new or old or a person young or old; Translator Tool, a cuboid jelly that can allow people to converse in any language across the universe; Designer Camera, a camera that produces dresses; and many more. Nobita's closest friend and love interest is Shizuka Minamoto, who eventually becomes his wife in the future and has a child with him named Nobisuke Nobi (the same name as Nobita's father). Nobita is often bullied by Takeshi Goda (nicknamed "Gian"), and Suneo Honekawa (Gian's sidekick), but they are shown to be friends in some of the episodes. In most episodes, a typical story consists of Nobita taking a gadget from Doraemon for his needs, eventually causing more trouble than he was trying to solve. In December 1969, the "Doraemon" manga appeared in six different children's monthly magazines published by Shogakukan.
The magazines were aimed at children from nursery school to fourth grade. In 1977 "CoroCoro Comic" was launched as the flagship magazine of "Doraemon." Since the debut of "Doraemon" in 1969, the stories have been selectively collected into forty-five tankōbon volumes, which were published under Shogakukan's "Tentōmushi Comics" imprint, from 1974 to 1996. Shogakukan published a "master works" collection consisting of twenty volumes between July 24, 2009 and September 25, 2012. In addition, Doraemon has appeared in a variety of manga series by Shogakukan. In 2005, Shogakukan published a series of five more manga volumes under the title "Doraemon+" ("Doraemon Plus"), which were not found in the forty-five original volumes. On December 1, 2014, a sixth volume of "Doraemon Plus" was published. This was the first volume in eight years. There have been two series of bilingual, Japanese and English, volumes of the manga by Shogakukan English Comics under the title "Doraemon: Gadget Cat from the Future", and two audio versions. The first series has ten volumes and the second six. In addition, 21st Century Publishing House (二十一世纪出版社集团) released bilingual English-Chinese versions in Mainland China. In July 2013, Fujiko Fujio Productions announced that they would be collaborating with ebook publisher Voyager Japan and localization company AltJapan Co., Ltd. to release an English-language version of the manga in full color digitally via the Amazon Kindle platform in North America. Shogakukan released the first volume in November 2013. This English version incorporates a variety of changes to character names: Nobita is "Noby", Shizuka is "Sue", Suneo is "Sneech", and Gian is "Big G", while dorayaki is "Yummy Bun/Fudgy Pudgy Pie." A total of 200 volumes have been released. The manga has been published in English in print by Shogakukan Asia, using the same translation as the manga available on Amazon Kindle.
Unlike the Amazon Kindle releases, these volumes are in black and white instead of color. They have released four volumes. Shogakukan started digital distribution of all forty-five original volumes throughout Japan from July 16, 2015. After a brief first attempt at an animated series in 1973 by Nippon Television, "Doraemon" remained fairly exclusive in manga form until 1979, when a newly formed animation studio, Shin-Ei Animation (now owned by TV Asahi), produced a second animated adaptation of "Doraemon." This series became incredibly popular, and ended with 1,787 episodes on March 25, 2005. In Asia, this version is sometimes referred to as the Ōyama Edition, after the voice actress who voiced Doraemon in this series. Celebrating the anniversary of the franchise, a third "Doraemon" animated series began airing on TV Asahi on April 15, 2005, with new voice actors and staff, and updated character designs. This version is sometimes referred to in Asia as the Mizuta Edition, as Wasabi Mizuta is the voice actress for Doraemon in this series. On May 12, 2014, TV Asahi Corporation announced an agreement with The Walt Disney Company to broadcast an English-language adaptation of the series in the United States, beginning in the summer of that year. Besides using the name changes that were used in AltJapan's English adaptation of the original manga, other changes and edits have also been made to make the show more relatable to an American audience, such as Japanese text being replaced with English text on certain objects like signs and graded papers, items such as yen notes being replaced by US dollar bills, and the setting being changed from Japan to the United States. Confirmed cast members of the new American adaptation include veteran anime voice actress Mona Marshall of "South Park" fame in the title role of Doraemon and Johnny Yong Bosch of "Power Rangers" and "Bleach" fame as Noby. The English dub is produced by Bang Zoom! Entertainment. Initial response to the edited dub was positive.
The Disney adaptation began broadcasting in Japan on Disney Channel from February 1, 2016. The broadcast offered the choice of the English voice track or a newly recorded Japanese track by the US cast. In EMEA regions, the series is licensed by LUK International. The series began broadcasting in the United Kingdom on August 17, 2015, on Boomerang. In 1980, Toho released the first of a series of annual feature-length animated films based on the lengthy special volumes published annually. Unlike the anime and manga (some based on the stories in select volumes), they are more action-adventure oriented and have more of a shōnen demographic, taking the familiar characters of "Doraemon" and placing them in a variety of exotic and perilous settings. Nobita and his friends have visited the age of the dinosaurs, the far reaches of the galaxy, the depths of the ocean, and a world of magic. Some of the films are based on legends such as Atlantis, and on literary works including "Journey to the West" and "Arabian Nights." Some films also have serious themes, especially on environmental topics and the use of technology. Overall, the films have a somewhat darker tone in their stories, unlike the manga and anime. There are 63 Japanese-only Doraemon video games, ranging from platformers to RPGs, beginning with the Emerson Arcadia 2001 system. Doraemon can also be seen in Namco's popular "Taiko no Tatsujin" rhythm game series like "Taiko no Tatsujin" (11 – 14 only), "", "Taiko no Tatsujin Wii", "Taiko no Tatsujin Plus", and "". The Japanese version of Microsoft's "3D Movie Maker" contained a Doraemon-themed expansion pack. The first Doraemon game to receive a Western release was "Doraemon Story of Seasons" (2019). A 2008 musical based on a 1990 anime film debuted at Tokyo Metropolitan Art Space on September 4, 2008, running through September 14. Wasabi Mizuta voiced Doraemon. The "Doraemon" franchise has had numerous pieces of licensed merchandise.
In 1999, "Doraemon" licensed merchandise sold in Japan, where it was the fifth highest-grossing franchise annually. "Doraemon" licensed merchandise in Japan later sold in 2000, in 2001, in 2003, during 2004–2008, and during 2010–2012, adding up to at least () licensed merchandise sales in Japan by 2012. Global retail sales of "Doraemon" licensed merchandise later generated in 2015, and in 2016. , "Doraemon" has generated at least in licensed merchandise sales. By 2015, more than 100 million tankōbon copies of the manga had been sold, and the anime series is available in over 30 countries. The "Doraemon" film series sold more than 103 million tickets at the Japanese box office by 2015, surpassing "Godzilla" as the highest-grossing film franchise in Japan, and the films grossed over at the worldwide box office, making "Doraemon" the highest-grossing anime film franchise. Doraemon was awarded the first Shogakukan Manga Award for children's manga in 1982. In 1997, it was awarded the first Osamu Tezuka Culture Award. In 2008, the Japanese Ministry of Foreign Affairs appointed Doraemon as the first anime cultural ambassador. On 22 April 2002, in the special issue of "Asian Hero" in "Time" magazine, Doraemon was selected as one of the 22 Asian Heroes. Being the only anime character selected, Doraemon was described as "The Cuddliest Hero in Asia". In 2005, the Taiwan Society of New York selected "Doraemon" as a culturally significant work of Japanese otaku pop-culture in its exhibit "Little Boy: The Arts of Japan's Exploding Subculture", curated by renowned artist Takashi Murakami. Jason Thompson praised the "silly situations" and "old fashioned, simple artwork", with Doraemon's expression and comments adding to the "surrounding elementary-school mischief". On September 3, 2012, Doraemon was granted official residence in the city of Kawasaki, one hundred years before he was born.
With the 2013 film, "", Doraemon has surpassed Godzilla in terms of overall ticket sales for a film franchise as Toho's most lucrative movie property. The 33-year series (1980–2013) has sold a combined 100 million tickets vs. the 50-year Godzilla series (1954–2004), which sold a combined 99 million tickets. It also became the largest franchise by numbers of admissions in Japan. The "Doraemon" anime series is India's highest-rated children's television show, with a total of 478.5 million viewers across Hungama TV and Disney Channel India. "Doraemon" is similarly popular in neighbouring Pakistan, where the Hindi-dubbed version is aired (Hindi and Urdu are mutually intelligible). Its popularity has led to controversy in both countries. In 2016, politicians and conservative activists in both India and Pakistan campaigned to ban the show from television because they claimed it "corrupts children." In India, legal notices were served against several companies, targeting "Doraemon" and "Crayon Shin-chan", as having an adverse effect on children. The Government of Bangladesh banned the Indian feeds of Disney Channel and Disney XD in February 2013 as the show "Doraemon" was being broadcast continuously throughout the day in Hindi. In Pakistan, "Doraemon" was targeted by the political party Pakistan Tehreek-e-Insaf as having a negative impact on children, because of Nobita's constant reliance on Doraemon's gadgets to solve problems, and they attempted to ban 24-hour cartoon channels in general, because of their supposed ruining of children's minds. They also attempted to ban the Hindi dub of the series, as Pakistan's official language is Urdu. A Fujiko F. Fujio museum opened in Kawasaki on September 3, 2011, featuring Doraemon as the star of the museum. As one of the oldest continuously running franchises, Doraemon remains a recognizable character for contemporary generations.
Nobita, the show's protagonist, is a break from other characters typically portrayed as special or extraordinary, and this portrayal has been seen both as a reason for the series' appeal and as a point against it, especially in the United States. Mexican filmmaker Guillermo del Toro considers "Doraemon" to be "the greatest kids series ever created". ESP Guitars have made several Doraemon guitars aimed at children. In late 2011, Shogakukan and Toyota joined forces to create a series of live-action commercials as part of Toyota's ReBorn ad campaign. The commercials depict the characters nearly 20 years older. Hollywood actor Jean Reno plays Doraemon. Doraemon has become a prevalent part of popular culture in Japan. Newspapers also regularly make references to Doraemon and his pocket as something with the ability to satisfy all wishes. The series is frequently referenced in other series such as "Gin Tama" and "Great Teacher Onizuka". Doraemon appears in appeals for charity. TV Asahi launched the "Doraemon Fund" charity to raise money for natural disaster relief. Doraemon, Nobita, and the other characters also appear in various educational manga. Doraemon appeared in the 2016 Summer Olympics closing ceremony to promote the 2020 Summer Olympics in Tokyo.
https://en.wikipedia.org/wiki?curid=8412
Dartmoor Preservation Association Dartmoor Preservation Association (DPA) is one of the oldest environmental or amenity bodies in the UK. It was founded in 1883. It concerns itself with Dartmoor, a National Park in Devon, south-west England. It began with two main areas of concern. Firstly, commoners' rights were being eroded through army use, including the firing of live artillery shells, and piecemeal enclosure of land around the margins. Secondly, there was increasing public interest in Dartmoor's scenery, archaeology, history and wildlife. The DPA has opposed what it considered to be unsuitable developments on Dartmoor throughout its history. In its founding year, the secretary, Robert Burnard, persuaded the War Department not to fire on the Okehampton Firing Range on Saturdays to allow access to the public. Many battles have been fought since, particularly against the military presence and the proposed building of reservoirs on the moor, notably under the Chairmanship of Lady Sayer, granddaughter of Robert Burnard. The DPA continues to follow the same objectives as when it was founded. For example, in June 2015, it supported the inhabitants of Widecombe-in-the-Moor in opposing the erection of a telecommunications mast in an area of pristine countryside. Dartmoor Preservation Association is a registered charity, Number 215665. Dartmoor is said to be one of the last remaining areas of wilderness in Britain, but it has been a managed landscape since the late Neolithic (3,000-2,500 BCE). The Bronze Age inhabitants (from 2,500 to 750 BCE) cleared ancient forest and developed farming. They made extensive use of surface moorstone in the construction of roundhouses (their remains now seen as "hut circles"), enclosures, land-dividing reaves, stone rows, stone circles, menhirs and kistvaens.
Farming has continued through the Medieval period to the present day, but a more disruptive activity to the landscape was the appearance of tin-mining, firstly by stream-working, then by lode-working and finally by underground mining. Many valleys have been dug over and scarred, leaving a rich industrial archaeology. Other activities such as newtake wall building, peat cutting, rabbit warrening, quarrying, clay extraction and the building of a prominent prison have all left marks on the moor. Recent undertakings have left more obvious changes: the building of reservoirs and the planting of conifer forests. The use of moorstone continued up to recent times with the extensive building of dry stone walls around farm newtakes. Later, stone was cut and dressed. The use of moorstone continued to such an extent that in 1847 boundary markers were cut around Pew Tor to protect it. Marker stones were erected around Roos Tor. The taking of stone started to change the Dartmoor landscape: for example Eric Hemery (writing in 1983) stated that Swell Tor had been "decapitated and disembowelled by the quarrymen". In August 1881, a public meeting was convened by the Portreeve of Tavistock in the Guildhall to discuss the continued taking of stone, particularly from landmark tors. The DPA was founded in 1883. The protected area around Pew Tor was extended in December 1896. In 1901, the DPA commissioned a report into damage to ancient monuments, caused by the taking of stone for building and road-mending, and into unlawful enclosures of common land. The first publication of the DPA, in 1890, was a short history of commoners' rights on Dartmoor and the commons of Devon. This notes a decrease in the numbers of animals even in medieval times: in 1296 – 5,000 cattle, 487 horses, 131 folds of sheep; in 1316 – 3,292 cattle, 368 horses, 100 folds of sheep. 
"An important battle occurred in 1894 when the Corporation of London attempted to buy the whole of Dartmoor in order to pipe its water to Paddington alongside Brunel’s recently converted railway, when it went from broad gauge to standard gauge. The DPA led the revolt against this". In 1897, the DPA went to court and successfully fought the enclosure of a section of Peter Tavy Great Common, in support of a farmer. Commoners' rights seem to have been a settled issue in recent years, except where they are impinged upon by the military presence. Dartmoor Training Area has been used regularly for military training since 1873, although it was used earlier during the Napoleonic and Crimean Wars. In 1906-07, seven miles of roads were built on the north moor to facilitate the movement of guns. There are three established firing ranges at Okehampton, Willsworthy and Merrivale. The area taken up with live firing ranges is 9,187 hectares (22,664 acres) and they are used on average 120 days each year. They are used for small arms, mortars and artillery smoke and illuminating shells. The use of the moor by the military has been a major concern of the DPA since its founding. In its first year, Robert Burnard (DPA Secretary) persuaded the War Department not to fire on the Okehampton Firing Range on Saturdays so that there might be some public access to the area. Lady Sylvia Sayer was very outspoken about it being totally at odds with the area being designated as a National Park. In 1963 the DPA published a widely circulated 24-page booklet entitled "Misuse of a National Park" which includes photographs of unexploded shells lying on the open moor, corrugated iron buildings, large craters, a derelict tank used as a target, bullet marks on standing stones, etc. It also contains details of a 1958 incident in which a young boy was killed by a mortar shell near Cranmere Pool.
Since the 1960s there has been much less military damage and litter as a result of the DPA persuading the Services to be more cautious. The military have changed since the Victorian era; they now have 120 conservation groups across the Ministry of Defence (MOD), including Dartmoor Military Conservation Group. The current leases run for many years, with Cramber Tor most recently being granted a further 40-year licence. Early afforestation occurred when Brimpts was planted with trees in 1862. The Forestry Commission was founded in 1919, following World War I, and in that year the Duchy of Cornwall planted 800 acres of conifers at Fernworthy. In 1921, Plymouth Corporation planted conifers around Burrator Reservoir. The Forestry Commission planted Bellever and Laughter Tor farms in 1930-32, and Soussons Down was also planted in 1944-45. The DPA opposed these post-war plantings and R. Hansford Worth (1868-1950, a Plymouth engineer, scientist and antiquarian) delivered a lecture at The Plymouth Athenaeum fiercely critical of the Duchy of Cornwall as the landowners, using the argument of encroachment on the rights of common and loss of ancient monuments. DPA opposition to forestry on Dartmoor arose again in 1953 when it wrote a policy on woodlands in the then-new national park. The DPA objected when Hawns, Dendles and High House Wastes, all near Cornwood, were designated for tree planting in 1959. Argument continued while Hawns and Dendles Wastes were ploughed in 1960. High House Waste was purchased by the DPA in 1964 and the Nature Conservancy (UK) bought neighbouring Dendles in 1965. The situation in 2015 is that some of the Dartmoor plantations have been affected by the fungal disease Phytophthora ramorum, which results in widespread clear felling to prevent further spread of the disease. The policy now is to replant with more native hardwood trees, although more resistant conifers are also being used.
There are eight Dartmoor reservoirs, the earliest being Tottiford Reservoir, 1861. Three were built in the mid-20th century: Fernworthy, 1942; Avon, 1957; and Meldon, 1972. The DPA fought many battles over these. It opposed plans for reservoirs on Brent Moor (1899) and Holne Moor (1901), where the Avon and Venford reservoirs respectively were later built. The DPA's opposition was supported in the House of Commons with argument made regarding the effects on the local water table. The DPA was one of many local and national amenity bodies that fought the building of the Meldon dam. The preservation battle for the Meldon valley was recorded in a DPA publication. The DPA offered a viable alternative site, Gorhuish Valley, for various reasons, including the fact that minerals such as arsenic would leach into the water supply if Meldon were selected. The Meldon story was discussed many times in Parliament. Another battle was fought against the flooding of the Swincombe valley to form another reservoir. This was rejected in Parliament in 1970, revived in 1974 and finally resolved by the building of the Roadford Reservoir to the west of the moor. In 1985 the DPA used funds from a bequest to purchase 50 acres of land where the dam of a reservoir at Swincombe would have to be. The National Parks and Access to the Countryside Act 1949 led to Dartmoor being one of the first four parks to be designated, by an order made on 15 August 1951 and confirmed on 30 October 1951. Shortly after this, the DPA tried to ensure that the new National Park was run by an independent committee and not by the Dartmoor Standing Committee, which was a subcommittee of Devon County Council Planning Committee. The committee was reformed as Dartmoor National Park Committee under the Local Government Act 1972, but it was still a subcommittee of Devon County Council and as such it was not seen by the DPA to be an independent guardian of the moor.
It was not until 1997 that an independent Dartmoor National Park Authority was enabled under the Environment Act 1995 as a free-standing local authority, forty-six years after the park was created, although it is still dominated by local authorities and government appointees. The DPA learned in October 1951 that the BBC planned to build a 750-foot television mast on North Hessary Tor, near Princetown; the mast was eventually erected in 1955. This was to be a relay from a transmitting station at Wenvoe, South Wales. The DPA objected to this threat and sought expert opinion, offered alternative solutions, pressed for a public enquiry, engaged a lawyer, held public meetings, distributed pamphlets, wrote to the press and petitioned parliament. Eventually, a public enquiry was announced. When the decision was made to permit the mast, a number of conditions were attached, among them that the development be built near the tor, leaving it still intact, and that its new approach road should not be fenced. During the process of obtaining land for the transmitter, one MP asked in the House of Commons: "Will the Assistant Postmaster-General bear in mind that we have no desire to hinder the provision of this station but that it is felt that ancient common rights such as these, that have existed for a thousand years, should be adequately protected or properly extinguished by due process of law?" During World War II, the Royal Air Force (RAF) built a mast and buildings on Peek Hill, as RAF Sharpitor. In 1956, permission was granted to rebuild the station as part of the "Gee" radio navigation system, to be occupied for ten years. There followed delay in leaving, and a proposal was made in 1970 by Devon & Cornwall Police to use the mast, which was rejected. Then later that year Plymouth Corporation wanted to use the exposed site for housing juvenile offenders. This was also rejected, but Plymouth appealed.
At a public enquiry in June 1973 Lady Sylvia Sayer represented the DPA and permission for development on the site was refused. A few years later, the DPA fought successfully in support of South West Water (SWW) against renewed calls for a new reservoir at Swincombe. To mark the victory, Sylvia Sayer asked SWW if the DPA could purchase the rocky outcrop of Sharpitor. The DPA purchased 32 acres in February 1984. Okehampton lies on the A30 main road, the shortest route from London to west Devon and Cornwall. The need for a bypass was mooted in 1963. In 1975, three routes were considered: a northern route through mainly farmland, a central route using a railway, and a southern route through Dartmoor National Park. In August 1976, the Department of the Environment announced the preferred route was through the National Park. A major event on the timeline of this project was a 96-day public enquiry held in Okehampton from 1 May 1979 to 4 February 1980. In March 1984, the DPA with other organisations petitioned Parliament opposing compulsory purchase orders on public open spaces. The Secretary of State announced in July 1985 that he was introducing a bill to reverse the decision of a Joint Parliamentary Committee and confirm a route through the National Park. This was followed by a confirmation bill in November 1985 that was passed in the House of Lords on 5 December 1985. Construction started in November 1986 and the road was opened on 19 July 1988. The DPA continues to follow the same objectives as when it was founded, although its activities have widened and involve local partners: it has a calendar of events, walks and work days, with its Conservation Team undertaking a variety of moorland projects; through the Moor Boots Scheme it funds the supply of walking boots to some children who need them for the Duke of Edinburgh Award Scheme; it collaborates with the Campaign for National Parks; and it monitors the activities of the Dartmoor National Park Authority, which runs the National Park.
It objected to eight planning proposals (with success in seven cases), with many other achievements recorded in the DPA Director's Annual Report. The DPA remains true to its original objectives and has also added other activities in support of Dartmoor and its inhabitants. The china clay industry on Dartmoor was established long before the DPA was founded. The earliest record of a china clay pit refers to Hook Lake in 1502. The area was surveyed around 1827 by Cornishmen with thirty years' experience in the clay industry. They obtained a 21-year lease in 1830, from the Earl of Morley who owned the land, to work the area between Lee Moor and Shaugh Moor. A rival pit was opened at Leftlake in about 1850 and at Hemerdon and Broomage in about 1855. Further pits were opened at Cholwichtown, Whitehill Yeo and Wigford Down/Brisworthy (circa 1860). Others followed at Smallhanger and Headon in the 1870s. Redlake started working in 1910. China clay pits are open-cast mines that result in large holes in the ground accompanied by large waste tips. Over time, the pits become larger and more ground is needed for the waste, changing the landscape: the effect of this can be seen from space. The DPA argues that this is an activity that does not agree with the ethos of a National Park, whose purpose is to protect landscape from unsuitable development. In 1994, the National Park boundaries were changed to include common land at Shaugh Moor and exclude china clay worked land at Lee Moor. The DPA revived its campaign with the publication of a booklet in 1999 when the Blackabrook Valley, Crownhill Down and Shaugh Moor, near the popular tourist area of Cadover Bridge, all came under threat from exploitation or dumping of waste. The china clay companies relinquished planning permissions in 2001. However, in November 2009, the clay companies, Sibelco and Imerys, produced a report reviewing old mineral permissions under the Environment Act 1995 with a view to joining up two pits.
A presumed Bronze Age barrow, known as Emmets Post, was to be removed and three other monuments might be affected. The DPA were recorded twice, with other bodies, in a Devon County Council Development Management Committee Report for their representations in securing the future of the three areas where planning permissions were relinquished in 2001. Oxford Archaeology held an open day during their excavation of Emmets Post in 2014 prior to its removal. The DPA and Exmoor Society held a joint reception at the House of Lords on 6 November 2008, hosted by Baroness Mallalieu, to lobby members of both Houses of Parliament and relevant Ministers about ensuring that environmental schemes for the uplands are "fit for purpose". Both organisations funded an invited number of upland hill farmers to attend. The excavation in August 2011 on the north moor of a Bronze Age burial kistvaen, or cist, that was originally uncovered in 2001 was part-funded by the DPA, along with other bodies. A conference for the upland farmers of Bodmin Moor, Exmoor and Dartmoor was held as a joint venture between the South West Uplands Federation and the DPA. It was run by the DPA at Exeter Racecourse in October 2012, with 150 delegates. Speakers came from the Foundation for Common Land, the Forest of Dartmoor Commoners, the University of Gloucestershire, the National Farmers Union of England and Wales and the Open Spaces Society. The CEO raised sponsorship from Dartmoor National Park, Exmoor National Park, Natural England, the Duchy of Cornwall and the Exmoor Society, reflecting the standing of the DPA with those bodies. Two major projects to underground overhead power cables in Dartmoor National Park have been completed jointly by Western Power Distribution, the South West Protected Landscapes Forum (SWPLF) and Dartmoor National Park Authority. The two schemes, on Holne Moor and Walkhampton Common, between them remove nearly 6 km of overhead line from open moorland.
At nearly 5 km, the Walkhampton scheme is the largest to be undertaken in the South West region by Western Power Distribution. The old overhead line was readily visible from the B3212 Princetown to Yelverton Road, strung across Walkhampton Common from Devil's Elbow to just above Horseyeatt at Peek Hill. The works to provide the new underground supply were mainly undertaken on the highway to minimise the impact on the sensitive moorland landscape, its archaeology, wildlife and livestock. The DPA has supported the undergrounding of these visually intrusive power lines for many years. The Dartmoor Conservation Garden is a joint project between the DPA and Dartmoor National Park Authority (DNPA) and is located in the Jack Wigmore Garden behind the High Moorland Centre in Princetown, a memorial garden to a former Chair of the Authority. It is planted with a cross-section of typical native Dartmoor plants. It also houses some typical Dartmoor archaeological features, such as a 4,000-year-old Bronze Age burial kistvaen (or cist) and a Medieval granite cross from Ter Hill. The cross marked the Monk's Path but was constantly being pushed over by cattle. The purpose of the Garden is to illustrate the biodiversity on Dartmoor. The garden opened in June 2015. The DPA were involved in a campaign in June 2015 against four telecommunications masts planned for Dartmoor, with the first to be erected in the village of Widecombe. At short notice, the DPA banners were taken out, letters were written, press interviews given and support given to the villagers when an inflatable mast was demonstrated – with the effect that the planning application was withdrawn. In common with other amenity bodies, such as those for the Lake District, Peak District, Pembrokeshire Coast, Yorkshire Dales Three Peaks and the New Forest Trust, the image of Dartmoor Preservation Association is evolving from its Victorian origins, although the original name is being retained.
Friends of Dartmoor projects a more modern image of preservation, in which several years of diplomacy have achieved good relations with the partner agencies that operate in the Dartmoor arena. This is due mainly to the efforts of the previous CEO, James Paxman, and his successor, Phil Hutt. The DPA Constitution, objectives and policies are published on the DPA web site. The objectives enshrined in the constitution are the protection, preservation and enhancement in the public interest of the landscape, antiquities, flora and fauna, natural beauty, cultural heritage and scientific interest of Dartmoor; the protection and preservation of public access to and on Dartmoor, subject to the ancient rights of commoners; co-operation with the commoners and any organisation in achieving DPA objectives; and the study, recording and publication of information upon the antiquities, history and natural history of Dartmoor. There is also an interest in the acquisition of land and rights to further DPA objectives, concomitant with being a charity. The DPA has twenty-two policies listed on its web site, regarding access and rights of way, fencing, protecting monuments, diverse habitats, bracken, china clay quarrying, military training and live firing, hill farming and small scale traditional local industries, quarrying, television and telephone masts, wind farms, planning applications, housing developments, woodlands and forestry, ponies, swaling, and recreational activities. The DPA logo incorporates a representation of a Dartmoor rock feature known as Bowerman's Nose. An earlier logo, which included a representation of Nun's Cross, appeared on the DPA Dartmoor Newsletter No. 48, October 1966, with a comment that designs based on the initial letters DPA had been exhausted. A simpler logo appeared in November 1969, when Newsletter 52 carried the logo with "DPA" on it. This was replaced in 2004 with the multicoloured logo.
https://en.wikipedia.org/wiki?curid=8414
Dartmouth College Dartmouth College is a private Ivy League research university in Hanover, New Hampshire, United States. Established in 1769 by Eleazar Wheelock, it is the ninth-oldest institution of higher education in the United States and one of the nine colonial colleges chartered before the American Revolution. Although founded as a school to educate Native Americans in Christian theology and the English way of life, Dartmouth primarily trained Congregationalist ministers throughout its early history before it gradually secularized, emerging at the turn of the 20th century from relative obscurity into national prominence. Following a liberal arts curriculum, the university provides undergraduate instruction in 40 academic departments and interdisciplinary programs including 57 majors in the humanities, social sciences, natural sciences, and engineering, and enables students to design specialized concentrations or engage in dual degree programs. Dartmouth comprises five constituent schools: the original undergraduate college, the Geisel School of Medicine, the Thayer School of Engineering, the Tuck School of Business, and the Guarini School of Graduate and Advanced Studies. The university also has affiliations with the Dartmouth–Hitchcock Medical Center, the Rockefeller Institute for Public Policy, and the Hopkins Center for the Arts. With a student enrollment of about 6,600, Dartmouth is the smallest university in the Ivy League. Undergraduate admissions are highly competitive, with an acceptance rate of 8.8% for the Class of 2024. Situated on a terrace above the Connecticut River, Dartmouth's 269-acre main campus is in the rural Upper Valley region of New England. The university functions on a quarter system, operating year-round on four ten-week academic terms. Dartmouth is known for its undergraduate focus, strong Greek culture, and wide array of enduring campus traditions.
Its 34 varsity sports teams compete intercollegiately in the Ivy League conference of the NCAA Division I. Dartmouth is consistently included among the highest-ranked universities in the United States by several institutional rankings.
https://en.wikipedia.org/wiki?curid=8418
Dartmouth, Devon Dartmouth is a town and civil parish in the English county of Devon. It is a tourist destination set on the western bank of the estuary of the River Dart, which is a long narrow tidal ria that runs inland as far as Totnes. It lies within the South Devon Area of Outstanding Natural Beauty and South Hams district, and had a population of 5,512 in 2001, reducing to 5,064 at the 2011 census. There are two electoral wards in the "Dartmouth" area (Townstal & Kingswear). Their combined population at the above census was 6,822. In 1086, the Domesday Book lists "Dunestal" as the only settlement in the area which now makes up the parish of Dartmouth. It was held by Walter of Douai. It paid tax on half a hide, and had two plough teams, two slaves, five villagers and four smallholders. There were six cattle, 40 sheep and 15 goats. At this time Townstal (as the name became) was apparently a purely agricultural settlement, centred around the church. Walter of Douai rebelled against William II, and his lands were confiscated and added to the honour of Marshwood (Dorset), which sublet Townstal and Dartmouth to the FitzStephens. It was probably during the early part of their proprietorship that Dartmouth began to grow as a port, as it was of strategic importance as a deep-water port for sailing vessels. The port was used as the sailing point for the Crusades of 1147 and 1190, and Warfleet Creek, close to Dartmouth Castle is supposed by some to be named for the vast fleets which assembled there. Dartmouth was a home of the Royal Navy from the reign of Edward III and was twice surprised and sacked during the Hundred Years' War, after which the mouth of the estuary was closed every night with a great chain. The narrow mouth of the Dart is protected by two fortified castles, Dartmouth Castle and Kingswear Castle. Originally Dartmouth's only wharf was Bayard's Cove, a relatively small area protected by a fort at the southern end of the town. 
In 1373 Geoffrey Chaucer visited Dartmouth, and among the pilgrims in his Canterbury Tales is a "schipman" from the town. Notwithstanding Dartmouth's connections with the crown and respectable society, it was a major base for privateering in medieval times. John Hawley or Hauley, a licensed privateer and sometime mayor of Dartmouth, is reputed to be a model for Chaucer's "schipman". The earliest street in Dartmouth to be recorded by name (in the 13th century) is Smith Street. Several of the houses on the street are originally late 16th century or early 17th century and probably rebuilt on the site of earlier medieval dwellings. The street name undoubtedly derives from the smiths and shipwrights who built and repaired ships here when the tidal waters reached as far as this point. Smith Street was also the site of the town pillory in medieval times. The first church in the parish was St Clement's, Townstal, which may have existed in some form before the 1190s. It was granted by the FitzStephens to Torre Abbey in about 1198, the Abbey having been founded in 1196, and the present stone-built church was probably started shortly after this. Manorial transactions are first recorded in 1220, when the manor house was at Norton, about half a mile west of Townstal. Names of occupations also started to appear, including taverner, tailor, coggar, korker, goldsmith, glover, skinner and baker. The "Fosse", now Foss Street, a dam across the creek known later as The Mill Pool, was first mentioned in 1243. The flow of water out of the pool through the Mill Gullet powered a tidal mill. The dam was used as an unofficial footpath linking Clifton, to the south, with Hardness, to the north. Before this it was necessary to go westwards to the head of the creek at Ford to travel between the two settlements. The lord of the manor was given the rights to hold a weekly market and an annual fair in 1231.
In 1281, a legal case proved that the Lord of Totnes had the right to charge tolls on ships using the river, and this right was bought by Nicholas of Tewkesbury in 1306, who conveyed the town, river and port to the king in 1327, so making Dartmouth a Royal Borough. The king gave the river to the Duchy of Cornwall in 1333, who still own the "fundus" or bed of the river. In 1335 Edward III granted Dartmouth to Joan of Carew, whose husband was Lord of Stoke Fleming, and almost immediately she obediently passed the lordship to Guy de Bryan, one of the king's leading ministers. In 1341, the town was granted a Royal Charter, which allowed for the election of a mayor. The borough was required to provide two ships for forty days per year. After 1390, no more is heard of lordship rights, and the borough became effectively independent of any lord. St Saviour's Church was constructed in 1335 and consecrated in 1372. It contains a pre-Reformation oak rood screen built in 1480 and several monuments including the tomb of John Hawley (d. 1408) and his two wives, covered with a large brass plate effigy of all three. A large medieval ironwork door is decorated with two leopards of the Plantagenets and is possibly the original portal. Although it is dated "1631", this is thought to be the date of a subsequent refurbishment coincidental with major renovations of the church in the 17th century. The gallery of the church is decorated with the heraldic crests of prominent local families and is reputed to be constructed of timbers from ships captured during the defeat of the Spanish Armada, although this has not been categorically substantiated. An engraving of the interior of the church showing the screen provided the inspiration for Letitia Elizabeth Landon's poetical illustration "Dartmouth Church" in Fisher's Drawing Room Scrap Book, 1833.
In mediaeval times, land access from the Totnes direction passed the manor at Norton and the parish church at Townstal before falling steeply along what are now Church Road, Mount Boone and Ridge Hill to the river at Hardness. There were steeper routes via Townstal Hill and Clarence Street and also via Brown's Hill. These were all too steep for vehicles, so the only land access was by packhorse. In 1671 there is the first mention of the building of the "New Ground". A previously existing sandbank was built up using ships' ballast, and a quay wall was built around it to provide more mooring space. The area proved too unstable to be built on, and is now the Royal Avenue Gardens. It was originally linked to the corner of the Quay by a bridge, opposite Duke Street. At the other end of The Quay, Spithead extended into the river for a few yards. In 1592 the "Madre de Deus", a Portuguese treasure ship captured by the English in the Azores, docked at Dartmouth Harbour. It attracted all manner of traders, dealers, cutpurses and thieves and by the time Sir Walter Raleigh arrived to reclaim the Crown's share of the loot, a cargo estimated at half a million pounds had been reduced to £140,000. Still, ten freighters were needed to carry the treasure to London. Henry Hudson put into Dartmouth on his return from North America, and was arrested for sailing under a foreign flag. The Pilgrim Fathers put into Dartmouth's Bayard's Cove, en route from Southampton to America. They rested a while before setting off on their journey in the "Mayflower" and the "Speedwell" on 20 August 1620. About 300 miles west of Land's End, upon realising that the "Speedwell" was unseaworthy, the two ships returned to Plymouth. The "Mayflower" departed alone to complete the crossing to Cape Cod. Dartmouth's sister city is Dartmouth, Massachusetts. The town contains many medieval and Elizabethan streetscapes and is a patchwork of narrow lanes and stone stairways.
A significant number of the historic buildings are listed. One of the most obvious is the Butterwalk, built 1635 to 1640. Its intricately carved wooden fascia is supported on granite columns. Charles II held court in the Butterwalk whilst sheltering from storms in 1671 in a room which now forms part of Dartmouth Museum. Much of the interior survives from that time. The Royal Castle Hotel was built in 1639 on the then new quay. The building was re-fronted in the 19th century, and as the new frontage is itself listed, it is not possible to see the original which lies beneath. A claimant for the oldest building is a former merchant's house in Higher Street, now a Good Beer Guide listed public house called "the Cherub", built circa 1380. Agincourt House (next to the Lower Ferry) is also 14th century. Dartmouth sent numerous ships to join the English fleet that attacked the Spanish Armada, including the Roebuck, Crescent and Hart. The "Nuestra Señora del Rosario", the Spanish Armada's "payship" commanded by Admiral Pedro de Valdés, was captured along with all its crew by Sir Francis Drake. It was reportedly anchored in the River Dart for more than a year and the crew were used as labourers on the nearby Greenway Estate which was the home of Sir Humphrey Gilbert and his half-brother Sir Walter Raleigh. Greenway was later the home of Dame Agatha Christie. The remains of a fort at Gallants Bower just outside the town are some of the best preserved remains of a Civil War defensive structure. The fort was built by Royalist occupation forces in c. 1643 to the south east of the town, with a similar fort at Mount Ridley on the opposite slopes of what is now Kingswear. The Parliamentarian General Fairfax attacked from the north in 1646, taking the town and forcing the Royalists to surrender, after which Gallants Bower was demolished. Before 1671, what is now the town centre was almost entirely tidal mud flats. 
The New Road (now Victoria Road) was constructed across the bed of the (silted up) Mill Pool and up the Ford valley after 1823. Spithead was extended in 1864 when the Dartmouth and Torbay Railway arrived in Kingswear and a pontoon was constructed, linked to Spithead by a bridge. The railway directors and others formed the Dartmouth Harbour Commissioners. At this time, all the roads in those parts of Dartmouth which were not land reclamations were very narrow. In 1864-7 Higher Street was widened into Southtown and linked to Lower Street, which was also widened, with the northern part renamed Fairfax Place. Some of the buildings were rebuilt further back with decorative frontages. In 1881 the Harbour Commissioners produced a scheme for an embankment or esplanade from near the Lower Ferry to Hardness, across the remains of The Pool, to provide an attraction for tourists and further mooring space. It was completed in 1885 after much disagreement between the Borough, the Commissioners and the Railway (now the Great Western Railway). A new station was also built at this time. The building of the Embankment left a section of river isolated between Spithead and the New Ground, which is known as The Boatfloat, and is linked to the river by a bridge for small vessels under the road. The coming of steam ships led to Dartmouth being used as a bunkering port, with coal being brought in by ship or train. Coal lumpers were members of gangs who competed to bunker the ships by racing to be first to a ship. This led to the men living as close as possible to the river, and their tenements became grossly overcrowded, with families living in slum conditions: up to 15 families in one house, one family to a room. The Royal National Lifeboat Institution opened the Dart Lifeboat Station at the Sand Quay in 1878, but it was closed in 1896. In all this time only one effective rescue was made by the lifeboat.
The area to the north of Ridge Hill was a shallow and muddy bay ("Coombe Mud") with a narrow road running along the shore linking with the Higher Ferry. The mud was a dumping ground for vessels, including a submarine. The reclamation was completed in 1937 by the extension of the Embankment and the reclamation of the mud behind it, which became Coronation Park. In the 1920s, aided by government grants, the council made a start on clearing the slums. This was aided by the decline in the use of coal as a fuel for ships. The slums were demolished, and the inhabitants were rehoused in new houses in the Britannia Avenue area, to the west of the old village or hamlet of Townstal. The process was interrupted by the Second World War, but was resumed with the construction of many prefabs, and later more houses. Community facilities were minimal at first, but a central area was reserved for a church, which was used by the Baptists and opened in 1954, together with a speedway track. The latter was later used for housing, but a new community centre was opened nearby, together with a leisure centre, an outdoor swimming pool, and later an indoor pool, and supermarkets. There are also light industrial units. In the latter part of the Second World War the town was a base for American forces and one of the departure points for Utah Beach in the D-Day landings. Slipways and harbour improvements were also constructed. Much of the surrounding countryside and notably Slapton Sands was closed to the public while it was used by US troops for practice landings and manoeuvres. Between 1985 and 1990 the Embankment was widened by 6 metres and raised to prevent flooding at spring tides. A tidal lock gate was provided at the Boatfloat bridge, which could be closed at such times. Dart Lifeboat Station was reopened in 2007, the first time that a lifeboat had been stationed in the town since 1896. It was initially kept in a temporary building in Coronation Park.
In 2010, a fire seriously damaged numerous historic properties in Fairfax Place and Higher Street. Several were Tudor and Grade I or Grade II listed buildings. The town was an ancient borough, incorporated by Edward III, known formally as Clifton-Dartmouth-Hardness, and consisting of the three parishes of "St Petrox", "St Saviour" and "Townstal", and incorporating the hamlets of Ford, Old Mill and Norton. It was reformed under the Municipal Corporations Act 1835. The town returned two members of parliament from the 13th century until 1835, after which one MP was elected until the town was disenfranchised in 1868. It remained a municipal borough until 1974, when it was merged into the South Hams district and became a successor parish of Dartmouth with a town council. Dartmouth Town Council is the lowest of three tiers of local government. It consists of 16 councillors representing the two wards of Clifton and Townstal. At the second tier, Dartmouth forms part of the Dartmouth and Kingswear ward of South Hams District Council, which returns three councillors. At the upper tier of local government, the Dartmouth and Kingswear Electoral Division elects one member to Devon County Council. The Port of Dartmouth Royal Regatta takes place annually over three days at the end of August. The event features traditional regatta boat races along with markets, fun fairs, community games, musical performances, air displays (including the Red Arrows) and fireworks. A Royal Navy guard ship is often present at the event. Other cultural events include beer festivals in February and July (the latter in Kingswear), a music festival and an art and craft weekend in June, a food festival in October and a Christmas candlelit event. The Flavel Centre incorporates the public library and performance spaces, featuring films, live music, comedy and exhibitions.
Bayard's Cove has been used in several television productions, including "The Onedin Line", a popular BBC television drama series that ran from 1971 to 1980. Many of the scenes from the BBC's popular series "Down to Earth", starring Ricky Tomlinson, were filmed at various locations around the town. Notable tourist attractions include the Dartmouth Royal Naval College, Dartmouth Castle and the Dartmouth Steam Railway, which terminates at Kingswear on the opposite bank of the river. Boat cruises to nearby places along the coast (such as Torbay and Start Bay) and up the river (to Totnes, Dittisham and the Greenway Estate) are provided by several companies. The paddle steamer PS Kingswear Castle returned to the town in 2013. The South West Coast Path National Trail passes through the town, and also through extensive National Trust coastal properties at Little Dartmouth and Brownstone (Kingswear). The Dart Valley Trail starts in Dartmouth, with routes either side of the River Dart as far as Dittisham, and continuing to Totnes via Cornworthy, Tuckenhay and Ashprington. The area has long been well regarded for yachting, and there are extensive marinas at Sandquay, Kingswear and Noss (approximately one mile north of Kingswear). The nearest Met Office weather station is Slapton, about 5 miles south-southwest of Dartmouth and a similar distance from the coast. As with the rest of the British Isles and South West England, the area experiences a maritime climate with warm summers and mild winters, particularly pronounced owing to its position near the coast; extremes range from a record low in January 1987 up to a record high during June 1976. Dartmouth is linked to Kingswear, on the other side of the River Dart, by three ferries. The Higher Ferry and the Lower Ferry are both vehicular ferries. The Passenger Ferry, as its name suggests, carries only passengers, principally to connect with the Dartmouth Steam Railway at Kingswear railway station.
The nearest bridge across the Dart is in Totnes, some distance away by road. The A379 road runs through Dartmouth, linking the town to Slapton and Kingsbridge to the southwest and to Torbay to the east across the Higher Ferry. The A3122 connects Dartmouth to a junction with the A381, and hence to both Totnes and a more direct route to Kingsbridge. Stagecoach South West provides local town bus services and links to Plymouth, Totnes, Exeter and Kingsbridge. In addition it provides links to the Torbay resorts of Brixham, Paignton and Torquay from Kingswear via the ferry. No railway has ever run to Dartmouth, but the town does have a railway station, opened on 31 March 1890 to replace the original facility on the pontoon; it is now a restaurant. The railway line to Kingswear was opened in 1864. As a result of a shortage of capital, a deviation from the original scheme was proposed, to run the line only from Churston to Greenway with a steamer service onwards to Dartmouth, but this was defeated in Parliament. It had been suggested that the Greenway terminus could, at a later date, be used as a jumping-off point for a bridge to the west bank of the Dart and a line direct to Dartmouth. In 1900, a Light Railway scheme was proposed for a crossing of the Dart near Maypool to join another line from Totnes and then proceed to Kingsbridge and Yealmpton, with a branch to Salcombe. This was also defeated by lack of funds. The railway terminated at a station called "Kingswear for Dartmouth" (now on the Dartmouth Steam Railway), and a ferry took passengers across the river to Dartmouth railway station, which had a dedicated pontoon. British Railways formally closed the line to mainline passenger trains in 1973, but it immediately re-opened as a heritage line and has run as one ever since. The town is home to the Royal Navy's officer training college (Britannia Royal Naval College), where all officers of the Royal Navy and many foreign naval officers are trained.
Dartmouth has one secondary school, formerly Dartmouth Community College and now Dartmouth Academy, an all-through school for those aged 3–16, and two primary schools: Dartmouth Primary School (now part of Dartmouth Academy) and St John the Baptist R.C. Primary School. Dartmouth Community College and Dartmouth Primary School are part of the Dartmouth Learning Campus; from September 2007, Dartmouth Community College has been part of a federation with Dartmouth Primary School and Nursery, meaning that the two schools share one governing body for pupils aged 1 to 16. Dartmouth also has a pre-school in the centre of town, established over 40 years ago and based in the old Victorian school rooms at South Ford Road. It provides care for 2- to 5-year-olds and is run as a charitable organisation. Dartmouth has a Non-League football club, Dartmouth A.F.C., who play at Long Cross. Dartmouth also hosts the annual "World Indoor Rally Championship", based on slot car racing, in the late summer. At the end of August and early September there is the annual Port of Dartmouth Royal Regatta. Since 1905 Dartmouth has had a greenhouse as part of the Royal Avenue Gardens. In May 2013 this building, used for the previous 10 years by Dartmouth in Bloom, a not-for-profit organisation affiliated with Britain in Bloom, was closed as structurally unsound. There are proposals to restore the greenhouse to its prior Edwardian style.
https://en.wikipedia.org/wiki?curid=8419
Dodo The dodo ("Raphus cucullatus") is an extinct flightless bird that was endemic to the island of Mauritius, east of Madagascar in the Indian Ocean. The dodo's closest genetic relative was the also-extinct Rodrigues solitaire, the two forming the subfamily Raphinae of the family of pigeons and doves. The closest living relative of the dodo is the Nicobar pigeon. A white dodo was once thought to have existed on the nearby island of Réunion, but this is now thought to have been confusion based on the Réunion ibis and paintings of white dodos. Subfossil remains give an indication of how tall the dodo stood and how much it may have weighed in the wild. The dodo's appearance in life is evidenced only by drawings, paintings, and written accounts from the 17th century. As these vary considerably, and only some of the illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved, and little is known about its behaviour. Though the dodo has historically been considered fat and clumsy, it is now thought to have been well-adapted for its ecosystem. It has been depicted with brownish-grey plumage, yellow feet, a tuft of tail feathers, a grey, naked head, and a black, yellow, and green beak. It used gizzard stones to help digest its food, which is thought to have included fruits, and its main habitat is believed to have been the woods in the drier coastal areas of Mauritius. One account states its clutch consisted of a single egg. It is presumed that the dodo became flightless because of the ready availability of abundant food sources and a relative absence of predators on Mauritius. The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors and invasive species, while its habitat was being destroyed. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered it to be a mythical creature.
In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected on Mauritius, mostly from the Mare aux Songes swamp. The extinction of the dodo within less than a century of its discovery called attention to the previously unrecognised problem of human involvement in the disappearance of entire species. The dodo achieved widespread recognition from its role in the story of "Alice's Adventures in Wonderland", and it has since become a fixture in popular culture, often as a symbol of extinction and obsolescence. The dodo was variously declared a small ostrich, a rail, an albatross, or a vulture, by early scientists. In 1842, Danish zoologist Johannes Theodor Reinhardt proposed that dodos were ground pigeons, based on studies of a dodo skull he had discovered in the collection of the Natural History Museum of Denmark. This view was met with ridicule, but was later supported by English naturalists Hugh Edwin Strickland and Alexander Gordon Melville in their 1848 monograph "The Dodo and Its Kindred", which attempted to separate myth from reality. After dissecting the preserved head and foot of the specimen at the Oxford University Museum and comparing it with the few remains then available of the extinct Rodrigues solitaire ("Pezophaps solitaria") they concluded that the two were closely related. Strickland stated that although not identical, these birds shared many distinguishing features of the leg bones, otherwise known only in pigeons. Strickland and Melville established that the dodo was anatomically similar to pigeons in many features. They pointed to the very short keratinous portion of the beak, with its long, slender, naked basal part. Other pigeons also have bare skin around their eyes, almost reaching their beak, as in dodos. 
The forehead was high in relation to the beak, and the nostril was located low on the middle of the beak and surrounded by skin, a combination of features shared only with pigeons. The legs of the dodo were generally more similar to those of terrestrial pigeons than of other birds, both in their scales and in their skeletal features. Depictions of the large crop hinted at a relationship with pigeons, in which this feature is more developed than in other birds. Pigeons generally have very small clutches, and the dodo is said to have laid a single egg. Like pigeons, the dodo lacked the vomer and septum of the nostrils, and it shared details in the mandible, the zygomatic bone, the palate, and the hallux. The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium. Throughout the 19th century, several species were classified as congeneric with the dodo, including the Rodrigues solitaire and the Réunion solitaire, as "Didus solitarius" and "Raphus solitarius", respectively ("Didus" and "Raphus" being names for the dodo genus used by different authors of the time). An atypical 17th-century description of a dodo and bones found on Rodrigues, now known to have belonged to the Rodrigues solitaire, led Abraham Dee Bartlett to name a new species, "Didus nazarenus", in 1852. Based on solitaire remains, it is now a synonym of that species. Crude drawings of the red rail of Mauritius were also misinterpreted as dodo species: "Didus broeckii" and "Didus herberti". For many years the dodo and the Rodrigues solitaire were placed in a family of their own, the Raphidae (formerly Dididae), because their exact relationships with other pigeons were unresolved. Each was also placed in its own monotypic family (Raphidae and Pezophapidae, respectively), as it was thought that they had evolved their similarities independently.
Osteological and DNA analysis has since led to the dissolution of the family Raphidae, and the dodo and solitaire are now placed in their own subfamily, Raphinae, within the family Columbidae. In 2002, American geneticist Beth Shapiro and colleagues analysed the DNA of the dodo for the first time. Comparison of mitochondrial cytochrome "b" and 12S rRNA sequences isolated from a tarsal of the Oxford specimen and a femur of a Rodrigues solitaire confirmed their close relationship and their placement within the Columbidae. The genetic evidence was interpreted as showing the Southeast Asian Nicobar pigeon ("Caloenas nicobarica") to be their closest living relative, followed by the crowned pigeons ("Goura") of New Guinea, and the superficially dodo-like tooth-billed pigeon ("Didunculus strigirostris") from Samoa (its scientific name refers to its dodo-like beak). This clade consists of generally ground-dwelling island endemic pigeons. A cladogram published by Shapiro and colleagues in 2002 showed the dodo's closest relationships within the Columbidae. A similar cladogram was published in 2007, inverting the placement of "Goura" and "Didunculus" and including the pheasant pigeon ("Otidiphaps nobilis") and the thick-billed ground pigeon ("Trugon terrestris") at the base of the clade. The DNA used in these studies was obtained from the Oxford specimen, and since this material is degraded, and no usable DNA has been extracted from subfossil remains, these findings still need to be independently verified. Based on behavioural and morphological evidence, Jolyon C. Parish proposed that the dodo and Rodrigues solitaire should be placed in the subfamily Gourinae along with the "Goura" pigeons and others, in agreement with the genetic evidence. In 2014, DNA of the only known specimen of the recently extinct spotted green pigeon ("Caloenas maculata") was analysed, and it was found to be a close relative of the Nicobar pigeon, and thus also of the dodo and Rodrigues solitaire.
The 2002 study indicated that the ancestors of the dodo and the solitaire diverged around the Paleogene-Neogene boundary, about 23.03 million years ago. The Mascarene Islands (Mauritius, Réunion, and Rodrigues) are of volcanic origin and are less than 10 million years old. Therefore, the ancestors of both birds probably remained capable of flight for a considerable time after the separation of their lineage. The Nicobar and spotted green pigeon were placed at the base of a lineage leading to the Raphinae, which indicates the flightless raphines had ancestors that were able to fly, were semi-terrestrial, and inhabited islands. This in turn supports the hypothesis that the ancestors of those birds reached the Mascarene islands by island hopping from South Asia. The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes and flightlessness. Despite the dodo's divergent skull morphology and adaptations for larger size, many features of its skeleton remained similar to those of smaller, flying pigeons. Another large, flightless pigeon, the Viti Levu giant pigeon ("Natunaornis gigoura"), was described in 2001 from subfossil material from Fiji. It was only slightly smaller than the dodo and the solitaire, and it too is thought to have been related to the crowned pigeons. One of the original names for the dodo was the Dutch "Walghvoghel", first used in the journal of Dutch Vice Admiral Wybrand van Warwijck, who visited Mauritius during the Second Dutch Expedition to Indonesia in 1598. "Walghe" means "tasteless", "insipid", or "sickly", and "voghel" means "bird". The name was translated by Jakob Friedlib into German as "Walchstök" or "Walchvögel". The original Dutch report, titled "Waarachtige Beschryving", was lost, but the English translation survived. Another account from that voyage, perhaps the first to mention the dodo, states that the Portuguese referred to them as penguins.
The meaning may not have been derived from "penguin" (the Portuguese referred to them as "fotilicaios" at the time), but from "pinion", a reference to the small wings. The crew of the Dutch ship "Gelderland" referred to the bird as "Dronte" (meaning "swollen") in 1602, a name that is still used in some languages. This crew also called them "griff-eendt" and "kermisgans", in reference to fowl fattened for the Kermesse festival in Amsterdam, which was held the day after they anchored on Mauritius. The etymology of the word "dodo" is unclear. Some ascribe it to the Dutch word "dodoor" for "sluggard", but it is more probably related to "Dodaars", which means either "fat-arse" or "knot-arse", referring to the knot of feathers on the hind end. The first record of the word "Dodaars" is in Captain Willem Van West-Zanen's journal in 1602. The English writer Sir Thomas Herbert was the first to use the word "dodo" in print, in his 1634 travelogue, claiming it was referred to as such by the Portuguese, who had visited Mauritius in 1507. Another Englishman, Emmanuel Altham, had used the word in a 1628 letter in which he also claimed its origin was Portuguese. The name "dodar" was introduced into English at the same time as dodo, but was only used until the 18th century. As far as is known, the Portuguese never mentioned the bird. Nevertheless, some sources still state that the word "dodo" derives from the Portuguese word "doudo" (currently "doido"), meaning "fool" or "crazy". It has also been suggested that "dodo" was an onomatopoeic approximation of the bird's call, a two-note pigeon-like sound resembling "doo-doo". The Latin name "cucullatus" ("hooded") was first used by Juan Eusebio Nieremberg in 1635 as "Cygnus cucullatus", in reference to Carolus Clusius's 1605 depiction of a dodo. In his 18th-century classic work "Systema Naturae", Carl Linnaeus used "cucullatus" as the specific name, but combined it with the genus name "Struthio" (ostrich).
Mathurin Jacques Brisson coined the genus name "Raphus" (referring to the bustards) in 1760, resulting in the current name "Raphus cucullatus". In 1766, Linnaeus coined the new binomial "Didus ineptus" (meaning "inept dodo"). This has become a synonym of the earlier name because of nomenclatural priority. As no complete dodo specimens exist, its external appearance, such as plumage and colouration, is hard to determine. Illustrations and written accounts of encounters with the dodo between its discovery and its extinction (1598–1662) are the primary evidence for its external appearance. According to most representations, the dodo had greyish or brownish plumage, with lighter primary feathers and a tuft of curly light feathers high on its rear end. The head was grey and naked, the beak green, black and yellow, and the legs were stout and yellowish, with black claws. A study of the few remaining feathers on the Oxford specimen head showed that they were pennaceous rather than plumaceous (downy) and most similar to those of other pigeons. Subfossil remains and remnants of the birds that were brought to Europe in the 17th century show that dodos were very large birds. The bird was sexually dimorphic; males were larger and had proportionally longer beaks. Weight estimates have varied from study to study. In 1993, Bradley C. Livezey proposed weight estimates in which males were heavier than females. Also in 1993, Andrew C. Kitchener attributed a high contemporary weight estimate and the roundness of dodos depicted in Europe to these birds having been overfed in captivity; weights in the wild were estimated to have been considerably lower, with fattened captive birds heavier still. A 2011 estimate by Angst and colleagues gave a markedly lower average weight. This has also been questioned, and there is still controversy over weight estimates. A 2016 study produced a further weight estimate, based on CT scans of composite skeletons.
It has also been suggested that the weight depended on the season, and that individuals were fat during cool seasons, but less so during hot ones. The skull of the dodo differed much from those of other pigeons, especially in being more robust, the bill having a hooked tip, and in having a short cranium compared to the jaws. The upper bill was nearly twice as long as the cranium, which was short compared to those of its closest pigeon relatives. The openings of the bony nostrils were elongated along the length of the beak, and they contained no bony septum. The cranium (excluding the beak) was wider than it was long, and the frontal bone formed a dome shape, with the highest point above the hind part of the eye sockets. The skull sloped downwards at the back. The eye sockets occupied much of the hind part of the skull. The sclerotic rings inside the eye were formed by eleven ossicles (small bones), similar to the number in other pigeons. The mandible was slightly curved, and each half had a single fenestra (opening), as in other pigeons. The dodo had about nineteen presynsacral vertebrae (those of the neck and thorax, including three fused into a notarium), sixteen synsacral vertebrae (those of the lumbar region and sacrum), six free tail (caudal) vertebrae, and a pygostyle. The neck had well-developed areas for muscle and ligament attachment, probably to support the heavy skull and beak. On each side, it had six ribs, four of which articulated with the sternum through sternal ribs. The sternum was large, but small in relation to the body compared to those of much smaller pigeons that are able to fly. The sternum was highly pneumatic, broad, and relatively thick in cross-section. The bones of the pectoral girdle, shoulder blades, and wing bones were reduced in size compared to those of flighted pigeons, and were more gracile compared to those of the Rodrigues solitaire, but none of the individual skeletal components had disappeared.
The carpometacarpus of the dodo was more robust than that of the solitaire, however. The pelvis was wider than that of the solitaire and other relatives, yet was comparable to the proportions in some smaller, flighted pigeons. Most of the leg bones were more robust than those of extant pigeons and the solitaire, but the length proportions were little different. Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from pigeons have been attributed to their flightlessness. The pelvic elements were thicker than those of flighted pigeons to support the higher weight, and the pectoral region and the small wings were paedomorphic, meaning that they were underdeveloped and retained juvenile features. The skull, trunk and pelvic limbs were peramorphic, meaning that they changed considerably with age. The dodo shared several other traits with the Rodrigues solitaire, such as features of the skull, pelvis, and sternum, as well as their large size. It differed in other aspects, such as being more robust and shorter than the solitaire, having a larger skull and beak, a rounded skull roof, and smaller orbits. The dodo's neck and legs were proportionally shorter, and it did not possess an equivalent to the knob present on the solitaire's wrists. Most contemporary descriptions of the dodo are found in ship's logs and journals of the Dutch East India Company vessels that docked in Mauritius when the Dutch Empire ruled the island. These records were used as guides for future voyages. Few contemporary accounts are reliable, as many seem to be based on earlier accounts, and none were written by scientists. 
One of the earliest accounts of the bird appears in van Warwijck's 1598 journal, and one of the most detailed descriptions is by Herbert in "A Relation of Some Yeares Travaille into Afrique and the Greater Asia" from 1634. The travel journal of the Dutch ship "Gelderland" (1601–1603), rediscovered in the 1860s, contains the only known sketches of living or recently killed specimens drawn on Mauritius. They have been attributed to the professional artist Joris Joostensz Laerle, who also drew other now-extinct Mauritian birds, and to a second, less refined artist. Apart from these sketches, it is unknown how many of the twenty or so 17th-century illustrations of the dodo were drawn from life or from stuffed specimens, which affects their reliability. Since dodos are otherwise only known from limited physical remains and descriptions, contemporary artworks are important to reconstruct their appearance in life. While there has been an effort since the mid-19th century to list all historical illustrations of dodos, previously unknown depictions continue to be discovered occasionally. The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens. It has also been suggested that the images might show dodos with puffed feathers, as part of display behaviour. The Dutch painter Roelant Savery was the most prolific and influential illustrator of the dodo, having made at least twelve depictions, often showing it in the lower corners. A famous painting of his from 1626, now called "Edwards's Dodo" as it was once owned by the ornithologist George Edwards, has since become the standard image of a dodo. It is housed in the Natural History Museum, London. The image shows a particularly fat bird and is the source for many other dodo illustrations. An Indian Mughal painting rediscovered in St.
Petersburg in the 1950s shows a dodo along with native Indian birds. It depicts a slimmer, brownish bird, and its discoverer A. Iwanow and British palaeontologist Julian Hume regarded it as one of the most accurate depictions of the living dodo; the surrounding birds are clearly identifiable and depicted with appropriate colouring. It is believed to be from the 17th century and has been attributed to the Mughal painter Ustad Mansur. The bird depicted probably lived in the menagerie of the Mughal Emperor Jahangir, located in Surat, where the English traveller Peter Mundy also claimed to have seen two dodos sometime between 1628 and 1633. In 2014, another Indian illustration of a dodo was reported, but it was found to be derivative of an 1836 German illustration. All post-1638 depictions appear to be based on earlier images, around the time reports mentioning dodos became rarer. Differences in the depictions led ornithologists such as Anthonie Cornelis Oudemans and Masauji Hachisuka to speculate about sexual dimorphism, ontogenic traits, seasonal variation, and even the existence of different species, but these theories are not accepted today. Because details such as markings of the beak, the form of the tail feathers, and colouration vary from account to account, it is impossible to determine the exact morphology of these features, whether they signal age or sex, or if they even reflect reality. Hume argued that the nostrils of the living dodo would have been slits, as seen in the "Gelderland", Cornelis Saftleven, Savery's Crocker Art Gallery, and Ustad Mansur images. According to this claim, the gaping nostrils often seen in paintings indicate that taxidermy specimens were used as models. Most depictions show that the wings were held in an extended position, unlike flighted pigeons, but similar to ratites such as the ostrich and kiwi. Little is known of the behaviour of the dodo, as most contemporary descriptions are very brief. 
Based on weight estimates, it has been suggested the male could reach the age of 21, and the female 17. Studies of the cantilever strength of its leg bones indicate that it could run quite fast. The legs were robust and strong to support the bulk of the bird, and also made it agile and manoeuvrable in the dense, pre-human landscape. Though the wings were small, well-developed muscle scars on the bones show that they were not completely vestigial, and may have been used for display behaviour and balance; extant pigeons also use their wings for such purposes. Unlike the Rodrigues solitaire, there is no evidence that the dodo used its wings in intraspecific combat. Though some dodo bones have been found with healed fractures, it had weak pectoral muscles and more reduced wings in comparison. The dodo may instead have used its large, hooked beak in territorial disputes. Since Mauritius receives more rainfall and has less seasonal variation than Rodrigues, which would have affected the availability of resources on the island, the dodo would have had less reason to evolve aggressive territorial behaviour. The Rodrigues solitaire was therefore probably the more aggressive of the two. CT scanning of a dodo skull revealed a brain-to-body-size ratio similar to that of modern pigeons, indicating that dodos were probably comparable to pigeons in intelligence. The preferred habitat of the dodo is unknown, but old descriptions suggest that it inhabited the woods on the drier coastal areas of south and west Mauritius. This view is supported by the fact that the Mare aux Songes swamp, where most dodo remains have been excavated, is close to the sea in south-eastern Mauritius. Such a limited distribution across the island could well have contributed to its extinction. A 1601 map from the "Gelderland" journal shows a small island off the coast of Mauritius where dodos were caught. Julian Hume has suggested this island was l'île aux Benitiers in Tamarin Bay, on the west coast of Mauritius.
Subfossil bones have also been found inside caves in highland areas, indicating that the dodo once occurred on mountains. Work at the Mare aux Songes swamp has shown that its habitat was dominated by tambalacoque and "Pandanus" trees and endemic palms. The near-coastal placement and wetness of the Mare aux Songes led to a high diversity of plant species, whereas the surrounding areas were drier. Many endemic species of Mauritius became extinct after the arrival of humans, so the ecosystem of the island is badly damaged and hard to reconstruct. Before humans arrived, Mauritius was entirely covered in forests, but very little remains of them today, because of deforestation. The surviving endemic fauna is still seriously threatened. The dodo lived alongside other recently extinct Mauritian birds such as the flightless red rail, the broad-billed parrot, the Mascarene grey parakeet, the Mauritius blue pigeon, the Mauritius owl, the Mascarene coot, the Mauritian shelduck, the Mauritian duck, and the Mauritius night heron. Extinct Mauritian reptiles include the saddle-backed Mauritius giant tortoise, the domed Mauritius giant tortoise, the Mauritian giant skink, and the Round Island burrowing boa. The small Mauritian flying fox and the snail "Tropidophora carinata" lived on Mauritius and Réunion, but vanished from both islands. Some plants, such as "Casearia tinifolia" and the palm orchid, have also become extinct. A 1631 Dutch letter (long thought lost, but rediscovered in 2017) is the only account of the dodo's diet, and also mentions that it used its beak for defence. The document uses word-play to refer to the animals described, with dodos presumably being an allegory for wealthy mayors. In addition to fallen fruits, the dodo probably subsisted on nuts, seeds, bulbs, and roots. It has also been suggested that the dodo might have eaten crabs and shellfish, like its relatives the crowned pigeons.
Its feeding habits must have been versatile, since captive specimens were probably given a wide range of food on the long sea journeys. Oudemans suggested that as Mauritius has marked dry and wet seasons, the dodo probably fattened itself on ripe fruits at the end of the wet season to survive the dry season, when food was scarce; contemporary reports describe the bird's "greedy" appetite. France Staub suggested that they mainly fed on palm fruits, and he attempted to correlate the fat-cycle of the dodo with the fruiting regime of the palms. Skeletal elements of the upper jaw appear to have been rhynchokinetic (movable in relation to each other), which must have affected its feeding behaviour. In extant birds, such as frugivorous (fruit-eating) pigeons, kinetic premaxillae help with consuming large food items. The beak also appears to have been able to withstand high force loads, which indicates a diet of hard food. In 2016, the first 3D endocast was made from the brain of the dodo; examination found that though the brain was similar to that of other pigeons in most respects, the dodo had a comparatively large olfactory bulb. This gave the dodo a good sense of smell, which may have aided in locating fruit and small prey. Several contemporary sources state that the dodo used gastroliths (gizzard stones) to aid digestion. The English writer Sir Hamon L'Estrange witnessed a live bird in London and described it. It is not known how the young were fed, but related pigeons provide crop milk. Contemporary depictions show a large crop, which was probably used to add space for food storage and to produce crop milk. It has been suggested that the maximum size attained by the dodo and the solitaire was limited by the amount of crop milk they could produce for their young during early growth. In 1973, the tambalacoque, also known as the dodo tree, was thought to be dying out on Mauritius, to which it is endemic.
There were supposedly only 13 specimens left, all estimated to be about 300 years old. Stanley Temple hypothesised that it depended on the dodo for its propagation, and that its seeds would germinate only after passing through the bird's digestive tract. He claimed that the tambalacoque was now nearly coextinct because of the disappearance of the dodo. Temple overlooked reports from the 1940s that found that tambalacoque seeds germinated, albeit very rarely, without being abraded during digestion. Others have contested his hypothesis and suggested that the decline of the tree was exaggerated, or that the seeds were also distributed by other extinct animals such as "Cylindraspis" tortoises, fruit bats or the broad-billed parrot. According to Wendy Strahm and Anthony Cheke, two experts in the ecology of the Mascarene Islands, the tree, while rare, has germinated since the demise of the dodo and numbers several hundred, not 13 as claimed by Temple, discrediting Temple's view of a sole-survival relationship between the dodo and the tree. The Brazilian ornithologist Carlos Yamashita suggested in 1997 that the broad-billed parrot may have depended on dodos and "Cylindraspis" tortoises to eat palm fruits and excrete their seeds, which became food for the parrots. "Anodorhynchus" macaws depended on now-extinct South American megafauna in the same way, but now rely on domesticated cattle for this service. As it was flightless and terrestrial and there were no mammalian predators or other kinds of natural enemy on Mauritius, the dodo probably nested on the ground. The account by François Cauche from 1651 is the only description of the egg and the call. Cauche's account is problematic, however, since it also mentions that the bird he was describing had three toes and no tongue, unlike dodos. This led some to believe that Cauche was describing a new species of dodo ("Didus nazarenus").
The description was most probably mingled with that of a cassowary, and Cauche's writings have other inconsistencies. A mention of a "young ostrich" taken on board a ship in 1617 is the only other reference to a possible juvenile dodo. An egg claimed to be that of a dodo is stored in the museum of East London, South Africa. It was donated by Marjorie Courtenay-Latimer, whose great aunt had received it from a captain who claimed to have found it in a swamp on Mauritius. In 2010, the curator of the museum proposed using genetic studies to determine its authenticity. It may instead be an aberrant ostrich egg. Because of the possible single-egg clutch and the bird's large size, it has been proposed that the dodo was K-selected, meaning that it produced a low number of altricial offspring, which required parental care until they matured. Some evidence, including the large size and the fact that tropical and frugivorous birds have slower growth rates, indicates that the bird may have had a protracted development period. The fact that no juvenile dodos have been found in the Mare aux Songes swamp may indicate that they produced few offspring, that they matured rapidly, that the breeding grounds were far away from the swamp, or that the risk of miring was seasonal. A 2017 study examined the histology of thin-sectioned dodo bones, modern Mauritian birds, local ecology, and contemporary accounts, to recover information about the life history of the dodo. The study suggested that dodos bred around August, after having potentially fattened themselves, corresponding with the fat and thin cycles of many vertebrates of Mauritius. The chicks grew rapidly, reaching robust, almost adult sizes and sexual maturity before the Austral summer or the cyclone season. Adult dodos which had just bred moulted after the Austral summer, around March.
The feathers of the wings and tail were replaced first, and the moulting would have been completed at the end of July, in time for the next breeding season. Different stages of moulting may also account for inconsistencies in contemporary descriptions of dodo plumage. Mauritius had previously been visited by Arab vessels in the Middle Ages and Portuguese ships between 1507 and 1513, but was settled by neither. No records of dodos by these visitors are known, although the Portuguese name for Mauritius, "Cerne (swan) Island", may have been a reference to dodos. The Dutch Empire acquired Mauritius in 1598, renaming it after Maurice of Nassau, and it was used for the provisioning of trade vessels of the Dutch East India Company henceforward. The earliest known accounts of the dodo were provided by Dutch travellers during the Second Dutch Expedition to Indonesia, led by admiral Jacob van Neck in 1598. They appear in reports published in 1601, which also contain the first published illustration of the bird. Since the first sailors to visit Mauritius had been at sea for a long time, their interest in these large birds was mainly culinary. The 1602 journal by Willem Van West-Zanen of the ship "Bruin-Vis" mentions that 24–25 dodos were hunted for food; they were so large that two could scarcely be consumed at mealtime, and their remains were preserved by salting. An illustration made for the 1648 published version of this journal, showing the killing of dodos, a dugong, and possibly Mascarene grey parakeets, was captioned with a Dutch poem, translated by Hugh Strickland in 1848. Some early travellers found dodo meat unsavoury, and preferred to eat parrots and pigeons; others described it as tough but good. Some hunted dodos only for their gizzards, as this was considered the most delicious part of the bird. Dodos were easy to catch, but hunters had to be careful not to be bitten by their powerful beaks.
The appearance of the dodo and the red rail led Peter Mundy to speculate about evolution, 230 years before Charles Darwin published his theory. The dodo was found interesting enough that living specimens were sent to Europe and the East. The number of transported dodos that reached their destinations alive is uncertain, and it is unknown how they relate to contemporary depictions and the few non-fossil remains in European museums. Based on a combination of contemporary accounts, paintings, and specimens, Julian Hume has inferred that at least eleven transported dodos reached their destinations alive. Hamon L'Estrange's description of a dodo that he saw in London in 1638 is the only account that specifically mentions a live specimen in Europe. In 1626 Adriaen van de Venne drew a dodo that he claimed to have seen in Amsterdam, but he did not mention whether it was alive, and his depiction is reminiscent of Savery's "Edwards's Dodo". Two live specimens were seen by Peter Mundy in Surat, India, between 1628 and 1634, one of which may have been the individual painted by Ustad Mansur around 1625. In 1628, Emmanuel Altham visited Mauritius and sent a letter to his brother in England. Whether the dodo survived the journey is unknown, and the letter was destroyed by fire in the 19th century. The earliest known picture of a dodo specimen in Europe is from a collection of paintings depicting animals in the royal menagerie of Emperor Rudolph II in Prague. This collection includes paintings of other Mauritian animals as well, including a red rail. The dodo, which may be a juvenile, seems to have been dried or embalmed, and had probably lived in the emperor's zoo for a while together with the other animals. That whole stuffed dodos were present in Europe indicates they had been brought alive and died there; it is unlikely that taxidermists were on board the visiting ships, and spirits were not yet used to preserve biological specimens.
Most tropical specimens were preserved as dried heads and feet. One dodo was reportedly sent as far as Nagasaki, Japan in 1647, but it was long unknown whether it arrived. Contemporary documents first published in 2014 proved the story, and showed that it had arrived alive. It was meant as a gift, and, despite its rarity, was considered of equal value to a white deer and a bezoar stone. It is the last recorded live dodo in captivity. Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey for sailors. Although some scattered reports describe mass killings of dodos for ships' provisions, archaeological investigations have found scant evidence of human predation. Bones of at least two dodos were found in caves at Baie du Cap that sheltered fugitive slaves and convicts in the 17th century, which would not have been easily accessible to dodos because of the high, broken terrain. The human population on Mauritius never exceeded 50 people in the 17th century, but the settlers introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources. At the same time, humans destroyed the forest habitat of the dodos. The impact of the introduced animals on the dodo population, especially the pigs and macaques, is today considered more severe than that of hunting. Rats were perhaps not much of a threat to the nests, since dodos would have been used to dealing with local land crabs. It has been suggested that the dodo may already have been rare or localised before the arrival of humans on Mauritius, since it would have been unlikely to become extinct so rapidly if it had occupied all the remote areas of the island. A 2005 expedition found subfossil remains of dodos and other animals killed by a flash flood.
Such mass mortalities would have further jeopardised a species already in danger of becoming extinct. Yet the fact that the dodo survived hundreds of years of volcanic activity and climatic changes shows the bird was resilient within its ecosystem. Some controversy surrounds the date of its extinction. The last widely accepted record of a dodo sighting is the 1662 report by shipwrecked mariner Volkert Evertsz of the Dutch ship "Arnhem", who described birds caught on a small islet off Mauritius, now suggested to be Amber Island. The dodos on this islet may not necessarily have been the last members of the species. The last claimed sighting of a dodo was reported in the hunting records of Isaac Johannes Lamotius in 1688. Statistical analysis of these records by Roberts and Solow gives a new estimated extinction date of 1693, with a 95% confidence interval of 1688–1715. The authors also pointed out that because the last sighting before 1662 was in 1638, the dodo was probably already quite rare by the 1660s, and thus a disputed report from 1674 by an escaped slave cannot be dismissed out of hand. Cheke pointed out that some descriptions after 1662 use the names "Dodo" and "Dodaers" when referring to the red rail, indicating that they had been transferred to it after the disappearance of the dodo itself. Cheke therefore points to the 1662 description as the last credible observation. A 1668 account by English traveller John Marshall, who used the names "Dodo" and "Red Hen" interchangeably for the red rail, mentioned that the meat was "hard", which echoes the description of the meat in the 1681 account. Even the 1662 account has been questioned by the writer Errol Fuller, as the reaction to distress cries matches what was described for the red rail. Until this explanation was proposed, a description of "dodos" from 1681 was thought to be the last account, and that date still has proponents.
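The Roberts and Solow estimate above comes from modelling the sighting record statistically. As an illustration only: their actual method was an optimal linear estimator applied to the full sighting record, and the sighting years below are hypothetical stand-ins loosely based on accounts mentioned in this article, so the numbers will not reproduce the published 1693 date. A simpler stationary-Poisson estimator in the spirit of Solow (1993) can be sketched like this:

```python
# Hypothetical sighting years, loosely based on accounts mentioned in this
# article (the 1598 discovery through the 1662 Evertsz report); the record
# actually analysed by Roberts and Solow is not reproduced here.
RECORD_START = 1598
SIGHTINGS = [1601, 1602, 1607, 1611, 1626, 1628, 1631, 1638, 1662]

def solow_extinction_estimate(years, start, alpha=0.05):
    """Stationary-Poisson extinction estimate in the spirit of Solow (1993).

    Conditioned on n sightings in (0, T), the sighting times are uniform,
    so the point estimate is t_n * (n + 1) / n and a one-sided (1 - alpha)
    upper bound is t_n / alpha**(1/n), both measured from the record start.
    """
    t = sorted(y - start for y in years)
    n, t_n = len(t), t[-1]
    point = start + t_n * (n + 1) / n
    upper = start + t_n / alpha ** (1 / n)
    return point, upper

point, upper = solow_extinction_estimate(SIGHTINGS, RECORD_START)
print(f"estimated extinction: {point:.0f}, 95% upper bound: {upper:.0f}")
```

Even with invented inputs, the shape of the result matches the qualitative point in the text: the estimated extinction date falls a few years after the last sighting, and the confidence bound widens as sightings become sparse.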
Recently accessible Dutch manuscripts indicate that no dodos were seen by settlers in 1664–1674. It is unlikely the issue will ever be resolved, unless late reports mentioning the name alongside a physical description are rediscovered. The IUCN Red List accepts Cheke's rationale for choosing the 1662 date, taking all subsequent reports to refer to red rails. In any case, the dodo was probably extinct by 1700, about a century after its discovery in 1598. The Dutch left Mauritius in 1710, but by then the dodo and most of the large terrestrial vertebrates there had become extinct. Even though the rareness of the dodo was reported as early as the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until it was later proved by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth. The bird was first used as an example of human-induced extinction in "Penny Magazine" in 1833, and has since been referred to as an "icon" of extinction. The only extant remains of dodos taken to Europe in the 17th century are a dried head and foot in the Oxford University Museum of Natural History, a foot once housed in the British Museum but now lost, a skull in the University of Copenhagen Zoological Museum, and an upper jaw in the National Museum, Prague. The last two were rediscovered and identified as dodo remains in the mid-19th century. Several stuffed dodos were also mentioned in old museum inventories, but none are known to have survived. Apart from these remains, a dried foot, which belonged to the Dutch professor Pieter Pauw, was mentioned by Carolus Clusius in 1605. Its provenance is unknown, and it is now lost, but it may have been collected during the Van Neck voyage.
The only known soft tissue remains, the Oxford head (specimen OUM 11605) and foot, belonged to the last known stuffed dodo, which was first mentioned as part of the Tradescant collection in 1656 and was moved to the Ashmolean Museum in 1659. It has been suggested that this might be the remains of the bird that Hamon L'Estrange saw in London, the bird sent by Emanuel Altham, or a donation by Thomas Herbert. Since the remains do not show signs of having been mounted, the specimen might instead have been preserved as a study skin. In 2018, it was reported that scans of the Oxford dodo's head showed that its skin and bone contained lead shot, pellets which were used to hunt birds in the 17th century. This indicates that the Oxford dodo was shot either before being transported to Britain, or some time after arriving. The circumstances of its killing are unknown, and the pellets are to be examined to identify where the lead was mined. Many sources state that the Ashmolean Museum burned the stuffed dodo around 1755 because of severe decay, saving only the head and leg. Statute 8 of the museum states "That as any particular grows old and perishing the keeper may remove it into one of the closets or other repository; and some other to be substituted." The deliberate destruction of the specimen is now believed to be a myth; it was removed from exhibition to preserve what remained of it. This remaining soft tissue has since degraded further; the head was dissected by Strickland and Melville, separating the skin from the skull in two halves. The foot is in a skeletal state, with only scraps of skin and tendons. Very few feathers remain on the head. It is probably a female, as the foot is 11% smaller and more gracile than the London foot, yet appears to be fully grown. The specimen was exhibited at the Oxford museum from at least the 1860s until 1998, whereafter it was mainly kept in storage to prevent damage.
Casts of the head can today be found in many museums worldwide. The dried London foot, first mentioned in 1665 and transferred to the British Museum in the 18th century, was displayed next to Savery's "Edwards's Dodo" painting until the 1840s, and it too was dissected by Strickland and Melville. It was not posed in a standing posture, which suggests that it was severed from a fresh specimen, not a mounted one. By 1896 it was mentioned as being without its integuments, and only the bones are believed to remain today, though their present whereabouts are unknown. The Copenhagen skull (specimen ZMUC 90-806) is known to have been part of the collection of Bernardus Paludanus in Enkhuizen until 1651, when it was moved to the museum in Gottorf Castle, Schleswig. After the castle was occupied by Danish forces in 1702, the museum collection was assimilated into the Royal Danish collection. The skull was rediscovered by J. T. Reinhardt in 1840. Based on its history, it may be the oldest known surviving remains of a dodo brought to Europe in the 17th century. It is shorter than the Oxford skull, and may have belonged to a female. It was mummified, but the skin has perished. The front part of a skull (specimen NMP P6V-004389) in the National Museum of Prague was found in 1850 among the remains of the Böhmisches Museum. Other elements supposedly belonging to this specimen have been listed in the literature, but it appears only the partial skull was ever present (a partial right limb in the museum appears to be from a Rodrigues solitaire). It may be what remains of one of the stuffed dodos known to have been at the menagerie of Emperor Rudolph II, possibly the specimen painted by Hoefnagel or Savery there. Until 1860, the only known dodo remains were the four incomplete 17th-century specimens. Philip Burnard Ayres found the first subfossil bones in 1860, which were sent to Richard Owen at the British Museum, who did not publish the findings.
In 1863, Owen requested the Mauritian Bishop Vincent Ryan to spread word that he should be informed if any dodo bones were found. In 1865, George Clark, the government schoolmaster at Mahébourg, finally found an abundance of subfossil dodo bones in the swamp of Mare aux Songes in southern Mauritius, after a 30-year search inspired by Strickland and Melville's monograph. In 1866, Clark explained his procedure to "The Ibis", an ornithology journal: he had sent his coolies to wade through the centre of the swamp, feeling for bones with their feet. At first they found few bones, until they cut away the herbage that covered the deepest part of the swamp, where they found many fossils. Harry Pasley Higginson, a railway engineer from Yorkshire, reported discovering the Mare aux Songes bones at the same time as Clark, and there is some dispute over who found them first. Higginson sent boxes of these bones to the Liverpool, Leeds and York museums. The swamp yielded the remains of over 300 dodos, but very few skull and wing bones, possibly because the upper bodies were washed away or scavenged while the lower body was trapped. The situation is similar to many finds of moa remains in New Zealand marshes. Most dodo remains from the Mare aux Songes have a medium to dark brown colouration. Clark's reports about the finds rekindled interest in the bird. Sir Richard Owen and Alfred Newton both wanted to be first to describe the post-cranial anatomy of the dodo, and Owen bought a shipment of dodo bones originally meant for Newton, which led to rivalry between the two. Owen described the bones in "Memoir on the Dodo" in October 1866, but erroneously based his reconstruction on the "Edwards's Dodo" painting by Savery, making it too squat and obese. In 1869 he received more bones and corrected its stance, making it more upright. Newton moved his focus to the Réunion solitaire instead. The remaining bones not sold to Owen or Newton were auctioned off or donated to museums.
In 1889, Théodor Sauzier was commissioned to explore the "historical souvenirs" of Mauritius and find more dodo remains in the Mare aux Songes. He was successful, and also found remains of other extinct species. In 2005, after a hundred years of neglect, a part of the Mare aux Songes swamp was excavated by an international team of researchers (International Dodo Research Project). To prevent malaria, the British had covered the swamp with hard core during their rule over Mauritius, which had to be removed. Many remains were found, including bones of at least 17 dodos in various stages of maturity (though no juveniles), and several bones obviously from the skeleton of one individual bird, which have been preserved in their natural position. These findings were made public in December 2005 in the Naturalis museum in Leiden. Of the fossils found in the swamp, 63% belonged to turtles of the extinct genus "Cylindraspis" and 7.1% belonged to dodos; the remains had been deposited over several centuries, about 4,000 years ago. Subsequent excavations suggested that dodos and other animals became mired in the Mare aux Songes while trying to reach water during a long period of severe drought about 4,200 years ago. Furthermore, cyanobacteria thrived in the conditions created by the excrement of the animals gathered around the swamp, which died of intoxication, dehydration, trampling, and miring. Though many small skeletal elements were found during the recent excavations of the swamp, few were found during the 19th century, probably owing to the employment of less refined methods when collecting. Louis Etienne Thirioux, an amateur naturalist at Port Louis, also found many dodo remains around 1900 at several locations. They included the first articulated specimen, which is the first subfossil dodo skeleton found outside the Mare aux Songes, and the only remains of a juvenile specimen, a now lost tarsometatarsus.
The former specimen was found in 1904 in a cave near Le Pouce mountain, and is the only known complete skeleton of an individual dodo. Thirioux donated the specimen to the Museum Desjardins (now the Natural History Museum at the Mauritius Institute). Thirioux's heirs sold a second mounted composite skeleton (composed of at least two skeletons, with a mainly reconstructed skull) to the Durban Museum of Natural Science in South Africa in 1918. Together, these two skeletons represent the most completely known dodo remains, including bone elements previously unrecorded (such as knee-caps and various wing bones). Though some contemporary writers noted the importance of Thirioux's specimens, they were not scientifically studied, and were largely forgotten until 2011, when sought out by a group of researchers. The mounted skeletons were laser scanned, and 3-D models reconstructed from the scans became the basis of a 2016 monograph about the osteology of the dodo. In 2006, explorers discovered a complete skeleton of a dodo in a lava cave in Mauritius. This was only the second associated skeleton of an individual specimen ever found, and the only one in recent times. Worldwide, 26 museums have significant holdings of dodo material, almost all found in the Mare aux Songes. The Natural History Museum, the American Museum of Natural History, the Cambridge University Museum of Zoology, the Senckenberg Museum, and others have almost complete skeletons, assembled from the dissociated subfossil remains of several individuals. In 2011, a wooden box containing dodo bones from the Edwardian era was rediscovered at the Grant Museum at University College London during preparations for a move. They had been stored with crocodile bones until then.
The supposed "white dodo" (or "solitaire") of Réunion is now considered an erroneous conjecture based on contemporary reports of the Réunion ibis and 17th-century paintings of white, dodo-like birds by Pieter Withoos and Pieter Holsteyn that surfaced in the 19th century. The confusion began when Willem Ysbrandtszoon Bontekoe, who visited Réunion around 1619, mentioned fat, flightless birds that he referred to as "Dod-eersen" in his journal, though without mentioning their colouration. When the journal was published in 1646, it was accompanied by an engraving of a dodo from Savery's "Crocker Art Gallery sketch". A white, stocky, and flightless bird was first mentioned as part of the Réunion fauna by Chief Officer J. Tatton in 1625. Sporadic mentions were subsequently made by Sieur Dubois and other contemporary writers. Baron Edmond de Sélys Longchamps coined the name "Raphus solitarius" for these birds in 1848, as he believed the accounts referred to a species of dodo. When 17th-century paintings of white dodos were discovered by 19th-century naturalists, it was assumed they depicted these birds. Oudemans suggested that the discrepancy between the paintings and the old descriptions was that the paintings showed females, and that the species was therefore sexually dimorphic. Some authors also believed the birds described were of a species similar to the Rodrigues solitaire, as it was referred to by the same name, or even that there were white species of both dodo and solitaire on the island. The Pieter Withoos painting, which was discovered first, appears to be based on an earlier painting by Pieter Holsteyn, three versions of which are known to have existed. According to Hume, Cheke, and Valledor de Lozoya, it appears that all depictions of white dodos were based on Roelant Savery's painting "Landscape with Orpheus and the animals", or on copies of it. The painting has generally been dated to 1611, though a post-1614, or even post-1626, date has also been proposed. 
The painting shows a whitish specimen and was apparently based on a stuffed specimen then in Prague; a "walghvogel" described as having a "dirty off-white colouring" was mentioned in an inventory of specimens in the Prague collection of the Holy Roman Emperor Rudolf II, to whom Savery was contracted at the time (1607–1611). Savery's several later images all show greyish birds, possibly because he had by then seen another specimen. Cheke and Hume believe the painted specimen was white, owing to albinism. Valledor de Lozoya has instead suggested that the light plumage was a juvenile trait, a result of bleaching of old taxidermy specimens, or simply artistic license. In 1987, scientists described fossils of a recently extinct species of ibis from Réunion with a relatively short beak, "Borbonibis latipes", before a connection to the solitaire reports had been made. Cheke suggested to one of the authors, Francois Moutou, that the fossils may have been of the Réunion solitaire, and this suggestion was published in 1995. The ibis was reassigned to the genus "Threskiornis", now combined with the specific epithet "solitarius" from the binomial "R. solitarius". Birds of this genus are also white and black with slender beaks, fitting the old descriptions of the Réunion solitaire. No fossil remains of dodo-like birds have ever been found on the island.
The dodo appears frequently in works of popular fiction, and even before its extinction, it was featured in European literature as a symbol of exotic lands and of gluttony, due to its apparent fatness. In 1865, the same year that George Clark started to publish reports about excavated dodo fossils, the newly vindicated bird was featured as a character in Lewis Carroll's "Alice's Adventures in Wonderland". It is thought that he included the dodo because he identified with it and had adopted the name as a nickname for himself because of his stammer, which made him accidentally introduce himself as "Do-do-dodgson", his legal surname. Carroll and the girl who served as inspiration for Alice, Alice Liddell, had enjoyed visiting the Oxford museum to see the dodo remains there. The book's popularity made the dodo a well-known icon of extinction. The dodo is used as a mascot for many kinds of products, especially in Mauritius. It appears as a supporter on the coat of arms of Mauritius and on Mauritius coins, is used as a watermark on all Mauritian rupee banknotes, and features as the background of the Mauritian immigration form. A smiling dodo is the symbol of the Brasseries de Bourbon, a popular brewer on Réunion, whose emblem displays the white species once thought to have lived there. The dodo is used to promote the protection of endangered species by environmental organisations, such as the Durrell Wildlife Conservation Trust and the Durrell Wildlife Park. The Center for Biological Diversity gives an annual 'Rubber Dodo Award' to "those who have done the most to destroy wild places, species and biological diversity". In 2011, the nephilid spider "Nephilengys dodo", which inhabits the same woods as the dodo once did, was named after the bird to raise awareness of the urgent need for protection of the Mauritius biota. Two species of ant from Mauritius have been named after the dodo: "Pseudolasius dodo" in 1946 and "Pheidole dodo" in 2013.
A species of isopod from a coral reef off Réunion was named "Hansenium dodo" in 1991. The name dodo has been used by scientists naming genetic elements, honouring the dodo's flightless nature. A fruit-fly gene within a region of a chromosome required for flying ability was named "dodo". In addition, a defective transposable element family from "Phytophthora infestans" was named "DodoPi", as it contained mutations that eliminated the element's ability to jump to new locations in a chromosome. In 2009, a previously unpublished 17th-century Dutch illustration of a dodo went on sale at Christie's and was expected to fetch £6,000. It is unknown whether the illustration was based on a specimen or on a previous image, and the artist is unidentified. It sold for £44,450. The poet Hilaire Belloc included a poem about the dodo in his "Bad Child's Book of Beasts" from 1896.
https://en.wikipedia.org/wiki?curid=8420
Sideroxylon grandiflorum Sideroxylon grandiflorum, known as tambalacoque or the dodo tree, is a long-lived tree in the family Sapotaceae, endemic to Mauritius. It is valued for its timber. The "Sideroxylon grandiflorum" fruit is analogous to the peach. Both are termed drupes because each has a hard endocarp, or pit, surrounding the seed, with the endocarp naturally splitting along a fracture line during germination. In 1973, it was thought that this species was dying out. There were supposedly only 13 specimens left, all estimated to be about 300 years old. The true age could not be determined because tambalacoque has no growth rings. Stanley Temple hypothesized that the dodo, which became extinct in the 17th century, ate tambalacoque fruits, and that only by passing through the digestive tract of the dodo could the seeds germinate. Temple (1977) force-fed seventeen tambalacoque fruits to wild turkeys. Seven of the fruits were crushed by the birds' gizzards. The remaining ten were either regurgitated or passed with the birds' feces. Temple planted these ten fruits, and three germinated. He did not try to germinate any seeds from control fruits not fed to turkeys, so the effect of feeding the fruits to turkeys was unclear. Reports on tambalacoque seed germination by Hill (1941) and King (1946) found that the seeds germinated without abrasion. Temple's hypothesis that the tree required the dodo has been contested. Others have suggested that the decline of the tree was exaggerated, or that other extinct animals, such as tortoises, fruit bats or the broad-billed parrot, may also have been distributing the seeds. Wendy Strahm and Anthony Cheke, two experts in Mascarene ecology, claim that the tree, while rare, has germinated since the demise of the dodo and numbers a few hundred, not 13. The difference in numbers arises because young trees are not distinct in appearance and may easily be confused with similar species.
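Temple's raw numbers also illustrate why the result is hard to interpret on its own. The sketch below is not from Temple's paper; it simply computes an exact (Clopper-Pearson) binomial confidence interval for the observed 3-of-10 germination rate using only the standard library, showing how wide the uncertainty is even before the missing control group is considered:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, grid=10000):
    """Exact two-sided confidence interval for a binomial proportion,
    found by grid-searching the inversion of the exact binomial test."""
    ps = [i / grid for i in range(grid + 1)]
    # lower bound: smallest p with P(X >= k) >= alpha/2
    lower = 0.0 if k == 0 else min(
        p for p in ps if 1 - binom_cdf(k - 1, n, p) >= alpha / 2)
    # upper bound: largest p with P(X <= k) >= alpha/2
    upper = 1.0 if k == n else max(
        p for p in ps if binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper

# Temple's reported outcome: 3 germinations out of 10 planted seeds
lower, upper = clopper_pearson(3, 10)
print(f"germination rate 3/10; 95% CI ({lower:.2f}, {upper:.2f})")
```

The interval spans roughly 7% to 65%, so the observed rate alone is compatible with anything from near-failure to majority germination, which is why the absence of an unfed control group leaves the turkey treatment's effect undetermined.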
The decline of the tree may instead be due to the introduction of domestic pigs and crab-eating macaques and to competition with introduced plants. Catling (2001) summarizes the debate, citing Owadally and Temple (1979) and Witmer (1991). Hershey (2004) reviewed the flaws in Temple's dodo-tambalacoque hypothesis. In 2004, the Botanical Society of America's Plant Science Bulletin disputed Temple's research as flawed, publishing evidence that the dodo's extinction did not directly cause the disappearance of young trees and suggesting that tortoises would have been more likely than the dodo to disperse the seeds, thereby discrediting the view that the tree's survival depended solely on the dodo. The tree is highly valued for its wood in Mauritius, which has led some foresters to scrape the pits by hand to help them sprout and grow.
https://en.wikipedia.org/wiki?curid=8421
Dwight Schultz William Dwight Schultz (born November 24, 1947) is an American actor and voice actor. He is known for his roles as Captain "Howling Mad" Murdock on the 1980s action series "The A-Team" and as Reginald Barclay in "Star Trek: The Next Generation", "Star Trek: Voyager", and the film "Star Trek: First Contact". He is also known in animation as the mad scientist Dr. Animo in the "Ben 10" series, Chef Mung Daal in the children's animated series "Chowder", and Eddie the Squirrel in "CatDog". Schultz was born in Baltimore, Maryland, of German descent, and is a Roman Catholic. He attended Calvert Hall College High School and Towson University. Schultz's breakthrough role was the character of Captain "Howling Mad" Murdock on "The A-Team". He subsequently appeared in several films, including "The Fan" (1981), and he starred in "Fat Man and Little Boy" (1989) as J. Robert Oppenheimer. In the early 1990s, he had a recurring role as Lieutenant Reginald Barclay in "Star Trek: The Next Generation", and he reprised the role in "Star Trek: Voyager" and the film "Star Trek: First Contact". He starred opposite Mel Harris in the 1992 television film "Child of Rage", the two playing a compassionate couple who adopt a troubled girl who has been sexually abused. In November 2009, Schultz confirmed that he and former "A-Team" co-star Dirk Benedict would make cameo appearances in the 2010 feature film "The A-Team". Schultz hosted a conservative talk radio podcast called "Howling Mad Radio", which ended in March 2009. He has also guest-hosted on numerous occasions for Michael Savage on "The Savage Nation", Jerry Doyle on "The Jerry Doyle Show", and Rusty Humphries on "The Rusty Humphries Show". Schultz married actress Wendy Fulton in 1983. Their daughter Ava (born 1987) serves in the Marines. Schultz is a Christian and a conservative, and he began regular appearances on "The Glazov Gang" in 2012, an Internet political talk show hosted by Jamie Glazov, managing editor of FrontPage Magazine. He also posts political commentaries and podcasts on his official fansite.
https://en.wikipedia.org/wiki?curid=8425
Density The density (more precisely, the volumetric mass density; also known as specific mass) of a substance is its mass per unit volume. The symbol most often used for density is "ρ" (the lower case Greek letter rho), although the Latin letter "D" can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where "ρ" is the density, "m" is the mass, and "V" is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as weight per unit volume, although this is scientifically inaccurate – this quantity is more specifically called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid. This causes it to rise relative to denser unheated material.
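The defining relation ρ = m/V can be sketched in a few lines of Python; the function name and the water example are illustrative, not part of any standard library:

```python
# Density as mass per unit volume: rho = m / V.
def density(mass_kg: float, volume_m3: float) -> float:
    """Return density in kg/m^3 for a mass in kg and a volume in m^3."""
    if volume_m3 <= 0:
        raise ValueError("volume must be positive")
    return mass_kg / volume_m3

# One litre (0.001 m^3) of water has a mass of roughly 1 kg:
rho_water = density(1.0, 0.001)  # about 1000 kg/m^3
```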
The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass. In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (Εύρηκα! Greek "I have found it"). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' "books of architecture", two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density ("ρ" = "m"/"V"), mass density has units of mass divided by volume. As there are many units of mass and volume covering many different magnitudes there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. One g/cm3 is equal to 1000 kg/m3. One cubic centimetre (abbreviation cc) is equal to one millilitre. 
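The exact relationship between the two most common units, stated above, can be expressed as a pair of trivial conversion helpers (the function names are illustrative):

```python
# 1 g/cm^3 = 1000 kg/m^3; since 1 cm^3 = 1 mL, g/cm^3 and g/mL coincide.
def g_per_cm3_to_kg_per_m3(rho_g_cm3: float) -> float:
    return rho_g_cm3 * 1000.0

def kg_per_m3_to_g_per_cm3(rho_kg_m3: float) -> float:
    return rho_kg_m3 / 1000.0

print(g_per_cm3_to_kg_per_m3(1.0))     # water: 1000.0 kg/m^3
print(kg_per_m3_to_g_per_cm3(1000.0))  # back again: 1.0 g/cm^3
```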
In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density. A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures different types of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question. The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object. If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as m = ∫ ρ(r) dV. In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules.
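The mass integral for an inhomogeneous body can be approximated numerically; the rod geometry and the linear density profile below are hypothetical values chosen so the result can be checked analytically:

```python
# m = integral of rho(r) dV, approximated with a midpoint rule.
# Hypothetical example: a 1 m rod of 1 cm^2 cross-section whose density
# rises linearly from 1000 to 2000 kg/m^3 along its length.
def rod_mass(rho_at, length_m, area_m2, n=100_000):
    """Midpoint-rule approximation of the mass integral along a rod."""
    dx = length_m / n
    return sum(rho_at((i + 0.5) * dx) * area_m2 * dx for i in range(n))

rho = lambda x: 1000.0 + 1000.0 * x  # density in kg/m^3 at position x (m)
mass = rod_mass(rho, 1.0, 1e-4)
# Analytically: mean density 1500 kg/m^3 times volume 1e-4 m^3 = 0.15 kg
```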
Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture. The bulk volume of a material—inclusive of the void fraction—is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions. Mass divided by "bulk" volume determines bulk density. This is not the same thing as volumetric mass density. To determine volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a "variable" void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling. In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void. In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand). 
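The distinction between bulk density and particle density can be sketched as follows; the glass-bead particle density is an assumed illustrative value, and the packing fraction is the close-packing limit cited above:

```python
import math

# Bulk density = particle (volumetric mass) density * non-void fraction.
def bulk_density(particle_density: float, packing_fraction: float) -> float:
    """Mass of solid per unit bulk volume, voids included."""
    return particle_density * packing_fraction

# Close-packed equal spheres fill at most pi/(3*sqrt(2)), about 74%.
fcc_fraction = math.pi / (3 * math.sqrt(2))
rho_bead = 2500.0  # kg/m^3, assumed particle density of glass beads
print(bulk_density(rho_bead, fcc_fraction))  # roughly 1851 kg/m^3
```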
Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two void materials is reliably known. In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures. The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10^−6 bar^−1 (1 bar = 0.1 MPa) and a typical thermal expansivity is 10^−5 K^−1. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius. In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is ρ = MP/(RT), where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature. In the case of volumic thermal expansion at constant pressure and small intervals of temperature, the temperature dependence of density is ρ = ρ₀/(1 + α·(T − T₀)), where ρ₀ is the density at a reference temperature T₀ and α is the thermal expansion coefficient of the material at temperatures close to T₀. The density of a solution is the sum of mass (massic) concentrations of the components of that solution.
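The ideal-gas relation ρ = MP/(RT) discussed above can be sketched directly; the dry-air molar mass is an approximate illustrative value:

```python
# Ideal-gas density: rho = M * P / (R * T)
R = 8.314462618  # universal gas constant, J/(mol*K)

def ideal_gas_density(molar_mass: float, pressure: float, temperature: float) -> float:
    """Density in kg/m^3 for molar mass in kg/mol, pressure in Pa, T in K."""
    return molar_mass * pressure / (R * temperature)

# Dry air (molar mass ~0.0290 kg/mol) at 101325 Pa and 293.15 K:
rho_air = ideal_gas_density(0.0290, 101325.0, 293.15)  # ~1.2 kg/m^3
# Doubling the pressure doubles rho; doubling the temperature halves it.
```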
The mass (massic) concentrations of the components ρi sum to the density of the solution: ρ = Σ ρi. Expressed as a function of the densities of the pure components of the mixture and their volume participation, this relation allows the determination of excess molar volumes, provided that there is no interaction between the components. Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients. The SI unit for density is the kilogram per cubic metre (kg/m3). The litre and the metric ton are not part of the SI, but are acceptable for use with it, leading to units such as the kilogram per litre (kg/L) and the gram per millilitre (g/mL); densities in these metric units all have exactly the same numerical value, one thousandth of the value in kg/m3. Liquid water has a density of about 1 kg/dm3, making any of these SI-compatible units numerically convenient to use, as most solids and liquids have densities between 0.1 and 20 kg/dm3. Density can also be stated in US customary units. Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) are in practice rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one Avoirdupois ounce, and indeed 1 g/cm3 ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion.
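The summation rule for solution density described above can be sketched as follows; the volume fractions and pure-component densities are hypothetical illustrative values, and the real (small, negative) excess volume of mixing is deliberately neglected:

```python
# Density of an ideal mixture as the sum of the components' mass
# concentrations: rho = sum(phi_i * rho_i) over volume fractions phi_i.
# This assumes no interaction between components (zero excess volume).
def mixture_density(volume_fractions, pure_densities):
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(phi * rho for phi, rho in zip(volume_fractions, pure_densities))

# Hypothetical 40/60 ethanol-water mixture by volume, in kg/m^3:
rho_mix = mixture_density([0.4, 0.6], [789.0, 998.0])  # about 914 kg/m^3
```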
https://en.wikipedia.org/wiki?curid=8429
Dave Barry David McAlister Barry (born July 3, 1947) is an American author and columnist who wrote a nationally syndicated humor column for the "Miami Herald" from 1983 to 2005. He has also written numerous books of humor and parody, as well as comic novels. Barry's honors include the Pulitzer Prize for Commentary (1988) and the Walter Cronkite Award for Excellence in Journalism (2005). Barry has defined a sense of humor as "a measurement of the extent to which we realize that we are trapped in a world almost totally devoid of reason. Laughter is how we express the anxiety we feel at this knowledge." Barry was born in Armonk, New York, where his father, David, was a Presbyterian minister. He was educated at Wampus Elementary School, Harold C. Crittenden Junior High School (both in Armonk), and Pleasantville High School, where he was elected "Class Clown" in 1965. He earned a Bachelor of Arts degree in English from Haverford College in 1969. As an alumnus of a Quaker-affiliated college, he avoided military service during the Vietnam War by registering as a religious conscientious objector. Notwithstanding his father's vocation, Barry decided "early on" that he was an atheist. He said, "The problem with writing about religion is that you run the risk of offending sincerely religious people, and then they come after you with machetes." Barry began his journalism career in 1971, working as a general-assignment reporter for the "Daily Local News" in West Chester, Pennsylvania, near his alma mater, Haverford College. He covered local government and civic events and was promoted to City Editor after about two years. He also started writing a weekly humor column for the paper and began to develop his unique style. He remained at the newspaper through 1974. He then worked briefly as a copy editor at the Associated Press's Philadelphia bureau before joining Burger Associates, a consulting firm. At Burger, he taught effective writing to business people. 
In his own words, he "spent nearly eight years trying to get various businesspersons to...stop writing things like 'Enclosed please find the enclosed enclosures,' but...eventually realized that it was hopeless." In 1981, he wrote a humorous guest column in the "Philadelphia Inquirer" about watching the birth of his son, which attracted the attention of Gene Weingarten, then an editor of the "Miami Herald"'s Sunday magazine "Tropic". Weingarten hired Barry as a humor columnist in 1983. Barry's column was syndicated nationally. Barry won a Pulitzer Prize for Commentary in 1988 for "his consistently effective use of humor as a device for presenting fresh insights into serious concerns." Barry's first novel, "Big Trouble", was published in 1999. The book was adapted into a motion picture directed by Barry Sonnenfeld and starring Tim Allen, Rene Russo, and Patrick Warburton, with a cameo by Barry (deleted in post-production). The movie was originally due for release in September 2001 but was postponed following the September 11, 2001, attacks because the story involved smuggling a nuclear weapon onto an airplane. The film was released in April 2002. In response to a column in which Barry mocked the cities of Grand Forks, North Dakota, and East Grand Forks, Minnesota, for calling themselves the "Grand Cities", Grand Forks named a sewage pumping station after Barry in January 2002. Barry traveled to Grand Forks for the dedication ceremony. Articles written by Barry have appeared in publications such as "Boating", "Home Office Computing", and "Reader's Digest", in addition to the "Chicken Soup for the Soul" inspirational book series. Two of his articles have been included in the "Best American Sportswriting" series. One of his columns was used as the introduction to the book "Pirattitude!: So You Wanna Be a Pirate? Here's How!", a follow-up to Barry's role in publicizing International Talk Like a Pirate Day.
His books have frequently appeared on the New York Times Best Seller List. On October 31, 2004, Barry announced that he would be taking an indefinite leave of absence of at least a year from his weekly column to spend more time with his family. In December 2005, Barry said in an interview with "Editor and Publisher" that he would not resume his weekly column, although he would continue such features as his yearly gift guide, his year-in-review feature, and his blog, as well as an occasional article or column. In 2005, Barry won the Walter Cronkite Award for Excellence in Journalism. On Sunday, September 22, 2013, the opening night of the 15th annual Fall for the Book festival in Fairfax, Virginia, Barry was awarded the event's highest honor, the Fairfax Prize, honoring outstanding literary achievement, presented by the Fairfax Library Foundation. From 1993 to 1997, CBS broadcast the sitcom "Dave's World", based on the books "Dave Barry Turns 40" and "Dave Barry's Greatest Hits". The show starred Harry Anderson as Barry and DeLane Matthews as his wife Beth. In an early episode, Barry appeared in a cameo role. After four seasons, the program was canceled shortly after being moved from its "coveted" Monday night slot to the "Friday night death slot," so named because of its association with low viewership. During college, Barry was in a band called the Federal Duck. While at the "Miami Herald", he and several of his colleagues created a band called the Urban Professionals, with Barry on lead guitar and vocals. They performed an original song called "The Tupperware Song" at the Tupperware headquarters in Orlando, Florida. Beginning in 1992, Barry played lead guitar in the Rock Bottom Remainders, a rock band made up of published authors. ("Remainder" is a publishing term for a book that doesn't sell.)
The band was founded by Barry's sister-in-law, Kathi Kamen Goldmark, for an American Booksellers Association convention, and has also included Stephen King, Amy Tan, Ridley Pearson, Scott Turow, Mitch Albom, Roy Blount Jr., Barbara Kingsolver, Matt Groening, and Barry's brother Sam, among others. The band's members "are not musically skilled, but they are extremely loud," according to Barry. Several high-profile musicians, including Al Kooper, Warren Zevon, and Roger McGuinn, have performed with the band, and Bruce Springsteen sat in at least once. The band's road tour resulted in the book "Mid-Life Confidential: The Rock Bottom Remainders Tour America with Three Chords and an Attitude". The Rock Bottom Remainders disbanded in 2012 following Goldmark's death from breast cancer. They have reunited several times, performing at the Tucson Festival of Books in 2016 and 2018. Beginning in 1984, Barry and "Tropic" editors Gene Weingarten and Tom Shroder have organized the Tropic Hunt (now the Herald Hunt), an annual puzzlehunt in Miami. A Washington, D.C., spinoff, the Post Hunt, began in 2008. Barry has waged several mock campaigns for President of the United States on a libertarian platform. He has also written for the Libertarian Party's national newsletter. The screen adaptation of Barry's book "Dave Barry's Complete Guide to Guys" was released in 2005; it is available on DVD. Barry married Lois Ann Shelnutt in 1969. He next married Beth Lenox in 1976. Barry and Lenox worked together at the "Daily Local News", where they began their journalism careers on the same day in September 1971; they had one child, Robert, born October 8, 1980. Barry and Lenox divorced in 1993. Barry experienced tragedy in his family: his father, David W., and his youngest brother suffered from alcoholism, and his father died in 1984; his sister Mary Katherine was institutionalized for schizophrenia; and his mother died by suicide in 1987.
In 1996, Barry married "Miami Herald" sportswriter Michelle Kaufman; they had a daughter, Sophie, in 2000. Barry has had dogs named Goldie, Earnest, Zippy, and now Lucy. All have been mentioned regularly in Barry's columns.
https://en.wikipedia.org/wiki?curid=8432
David Angell David Lawrence Angell (April 10, 1946 – September 11, 2001) was an American screenwriter and television producer. Angell won multiple Emmy Awards as the creator and executive producer, along with Peter Casey and David Lee, of the sitcoms "Wings" and "Frasier". Angell and his wife Lynn both died heading home from their vacation on Cape Cod aboard American Airlines Flight 11, the first plane to hit the World Trade Center during the September 11 attacks. Angell was born in Providence, Rhode Island, to Henry and Mae (née Cooney) Angell. He received a bachelor's degree in English Literature from Providence College. He married Lynn Edwards on August 14, 1971. Soon after graduation, Angell entered the U.S. Army and served at the Pentagon until 1972. He then moved to Boston and worked as a methods analyst at an engineering company and later at an insurance firm in Rhode Island. His brother, the Most Rev. Kenneth Angell, was a Roman Catholic prelate and Bishop of Burlington, Vermont. Angell moved to Los Angeles in 1977. His first script was sold to the producers of the "Annie Flynn" series. Five years later, he sold his second script, "Archie Bunker's Place". In 1983, he joined "Cheers" as a staff writer. In 1985, Angell joined forces with Peter Casey and David Lee as "Cheers" supervising producers/writers. The trio received 37 Emmy Award nominations and won 24 Emmy Awards, including the above-mentioned awards for "Frasier". They also won an Outstanding Comedy Series Emmy for "Cheers" in 1989, which Angell, Casey, Lee and the series' other producers shared, and an Outstanding Writing/Comedy Emmy for "Cheers", which Angell received in 1984. After working together as producers on "Cheers", Angell, Casey and Lee formed Grub Street Productions. In 1990, they created and executive-produced the comedy series "Wings".
Angell and his wife, Lynn, were among the passengers of American Airlines Flight 11 killed in the September 11 attacks on the World Trade Center in New York City in 2001. The American Screenwriters Association awards the annual David Angell Humanitarian Award to any individual in the entertainment industry who contributes to global well-being through donations of time, expertise or other support to improve the human condition. In 2004, The Angell Foundation of Los Angeles, California, awarded Providence College a gift of $2 million for the Smith Center for the Arts. The first episode of "Frasier" to air after the attacks, the two-part "Don Juan in Hell" on September 25, 2001, ended with the memorial tribute, "In loving memory of our friends Lynn and David Angell". "Goodnight, Seattle", the series finale, which aired on May 13, 2004, featured the birth of Niles Crane and Daphne Moon's son, who is named David in tribute. At the National 9/11 Memorial, Angell and his wife are memorialized at the North Pool, on Panel N-1, along with other passengers from Flight 11.
https://en.wikipedia.org/wiki?curid=8436
Diedrich Hermann Westermann Diedrich Hermann Westermann (June 24, 1875 – May 31, 1956) was a German missionary, Africanist, and linguist. He substantially extended and revised the work of Carl Meinhof, his teacher, although he rejected some of Meinhof's theories only implicitly. Westermann is seen as one of the founders of modern African linguistics. He carried out extensive linguistic and anthropological research in the area ranging from Senegal eastwards to the Upper Nile. His linguistic publications cover a wide range of African languages, including the Gbe languages, Nuer, Kpelle, Shilluk, Hausa, and Guang. Westermann's comparative work, begun in 1911, initially brought together much of today's Niger–Congo and Nilo-Saharan language phyla under the name Sudanic languages. His most important later publication, "Die westlichen Sudansprachen" (1927), divided these into East and West Sudanic languages and laid the basis for what would become Niger–Congo. In this book and a series of associated articles between 1925 and 1928, Westermann both identified a large number of roots that form the basis of our understanding of Niger–Congo and set out the evidence for the coherence of many of the families that constitute it. Much of the classification of African languages associated with Joseph Greenberg actually derives from the work of Westermann. In 1927 Westermann published a "Practical Orthography of African Languages", which later became known as the "Westermann script". Subsequently, he published the influential and oft-reprinted "Practical Phonetics for Students of African Languages" in collaboration with Ida C. Ward (1933). He was born in Baden near Bremen and also died there.
https://en.wikipedia.org/wiki?curid=8437
Diacritic A diacritic (also diacritical mark, diacritical point, diacritical sign, or accent) is a glyph added to a letter or basic glyph. The term derives from the Ancient Greek "diakritikós" ("distinguishing"), from "diakrínō" ("to distinguish"). "Diacritic" is primarily an adjective, though sometimes used as a noun, whereas "diacritical" is only ever an adjective. Some diacritical marks, such as the acute ( ´ ) and grave ( ` ), are often called "accents". Diacritical marks may appear above or below a letter, or in some other position such as within the letter or between two letters. The main use of diacritical marks in the Latin script is to change the sound-values of the letters to which they are added. Examples are the diaereses in the borrowed French words "naïve" and "Noël", which show that the vowel with the diaeresis mark is pronounced separately from the preceding vowel; the acute and grave accents, which can indicate that a final vowel is to be pronounced, as in "saké" and poetic "breathèd"; and the cedilla under the "c" in the borrowed French word "façade", which shows it is pronounced /s/ rather than /k/. In other Latin-script alphabets, they may distinguish between homonyms, such as the French "là" ("there") versus "la" ("the"), which are both pronounced /la/. In Gaelic type, a dot over a consonant indicates lenition of the consonant in question. In other alphabetic systems, diacritical marks may perform other functions. Vowel pointing systems, namely the Arabic harakat and the Hebrew niqqud systems, indicate vowels that are not conveyed by the basic alphabet. The Indic virama ( ् ) and the Arabic sukūn mark the absence of vowels. Cantillation marks indicate prosody. Other uses include the Early Cyrillic titlo stroke ( ◌҃ ) and the Hebrew gershayim, which, respectively, mark abbreviations or acronyms, and Greek diacritical marks, which showed that letters of the alphabet were being used as numerals.
In the Hanyu Pinyin official romanization system for Chinese, diacritics are used to mark the tones of the syllables in which the marked vowels occur. In orthography and collation, a letter modified by a diacritic may be treated either as a new, distinct letter or as a letter–diacritic combination. This varies from language to language, and may vary from case to case within a language. English is the only major modern European language requiring no diacritics for native words (although a diaeresis may be used in words such as "coöperation"). In some cases, letters are used as "in-line diacritics", with the same function as ancillary glyphs, in that they modify the sound of the letter preceding them, as in the case of the "h" in the English pronunciation of "sh" and "th". Several types of diacritic are used in alphabets based on the Latin script. The tilde, dot, comma, titlo, apostrophe, bar, and colon are sometimes diacritical marks, but also have other uses. Not all diacritics occur adjacent to the letter they modify. In the Wali language of Ghana, for example, an apostrophe indicates a change of vowel quality, but occurs at the beginning of the word, as in the dialects "’Bulengee" and "’Dolimi". Because of vowel harmony, all vowels in a word are affected, so the scope of the diacritic is the entire word. In abugida scripts, like those used to write Hindi and Thai, diacritics indicate vowels, and may occur above, below, before, after, or around the consonant letter they modify. The tittle (dot) on the letters "i" and "j" of the Latin alphabet originated as a diacritic to clearly distinguish "i" from the minims (downstrokes) of adjacent letters. It first appeared in the 11th century in the sequence "ii", then spread to "i" adjacent to "m, n, u", and finally to all lowercase "i"s. The "j", originally a variant of "i", inherited the tittle.
The shape of the diacritic developed from initially resembling today's acute accent to a long flourish by the 15th century. With the advent of Roman type it was reduced to the round dot we have today. Languages from Eastern Europe tend to use diacritics on both consonants and vowels, whereas in Western Europe digraphs are more typically used to change consonant sounds. Most languages in Western Europe use diacritics on vowels, aside from English where there are typically none (with some exceptions). These diacritics are used in addition to the acute, grave, and circumflex accents and the diaeresis. The diacritics 〮 and 〯, known as Bangjeom, were used to mark pitch accents in Hangul for Middle Korean. They were written to the left of a syllable in vertical writing and above a syllable in horizontal writing. The South Korean government officially revised the romanization of the Korean language in July 2000 to eliminate diacritics. In addition to the above vowel marks, transliteration of Syriac sometimes includes "ə", "e̊" or superscript "e" (or often nothing at all) to represent an original Aramaic schwa that was lost at some point in the development of Syriac. Some transliteration schemes find its inclusion necessary for showing spirantization or for historical reasons. Some non-alphabetic scripts also employ symbols that function essentially as diacritics. Different languages use different rules to put diacritic characters in alphabetical order. French and Portuguese treat letters with diacritical marks the same as the underlying letter for purposes of ordering and dictionaries. The Scandinavian languages and the Finnish language, by contrast, treat the characters with diacritics "å", "ä", and "ö" as distinct letters of the alphabet, and sort them after "z". Usually "ä" (a-umlaut) and "ö" (o-umlaut) [used in Swedish and Finnish] are sorted as equivalent to "æ" (ash) and "ø" (o-slash) [used in Danish and Norwegian].
Also, "aa", when used as an alternative spelling to "å", is sorted as such. Other letters modified by diacritics are treated as variants of the underlying letter, with the exception that "ü" is frequently sorted as "y". Languages that treat accented letters as variants of the underlying letter usually alphabetize words with such symbols immediately after similar unmarked words. For instance, in German where two words differ only by an umlaut, the word without it is sorted first in German dictionaries (e.g. "schon" and then "schön", or "fallen" and then "fällen"). However, when names are concerned (e.g. in phone books or in author catalogues in libraries), umlauts are often treated as combinations of the vowel with a suffixed "e"; Austrian phone books now treat characters with umlauts as separate letters (immediately following the underlying vowel). In Spanish, the grapheme "ñ" is considered a new letter different from "n" and collated between "n" and "o", as it denotes a different sound from that of a plain "n". But the accented vowels "á", "é", "í", "ó", "ú" are not separated from the unaccented vowels "a", "e", "i", "o", "u", as the acute accent in Spanish only modifies stress within the word or denotes a distinction between homonyms, and does not modify the sound of a letter. For a comprehensive list of the collating orders in various languages, see Collating sequence. Modern computer technology was developed mostly in English-speaking countries, so data formats, keyboard layouts, etc. were developed with a bias favoring English, a language with an alphabet without diacritical marks. Efforts have been made to create internationalized domain names that further extend the English alphabet (e.g., "pokémon.com"). Depending on the keyboard layout, which differs amongst countries, it is more or less easy to enter letters with diacritics on computers and typewriters. 
Some have their own keys; some are created by first pressing the key with the diacritic mark followed by the letter to place it on. Such a key is sometimes referred to as a dead key, as it produces no output of its own but modifies the output of the key pressed after it. In modern Microsoft Windows and Linux operating systems, the keyboard layouts "US International" and "UK International" feature dead keys that allow one to type Latin letters with the acute, grave, circumflex, diaeresis, tilde, and cedilla found in Western European languages (specifically, those combinations found in the ISO Latin-1 character set) directly: for example, the dead key " followed by e gives "ë", and ~ followed by o gives "õ". On Apple Macintosh computers, there are keyboard shortcuts for the most common diacritics; Option-E followed by a vowel places an acute accent, Option-U followed by a vowel gives an umlaut, and Option-C gives a cedilla. Diacritics can be composed in most X Window System keyboard layouts, as well as other operating systems, such as Microsoft Windows, using additional software. On computers, the availability of code pages determines whether one can use certain diacritics. Unicode solves this problem by assigning every known character its own code; if this code is known, most modern computer systems provide a method to input it. With Unicode, it is also possible to combine diacritical marks with most characters. However, as of 2019, very few fonts include the necessary support to correctly render character-plus-diacritic(s) for the Latin, Cyrillic and some other alphabets (exceptions include Andika). The following languages have letters that contain diacritics that are considered independent letters distinct from those without diacritics. English is one of the few European languages that does not have many words that contain diacritical marks. Instead, digraphs are the main way the Modern English alphabet adapts the Latin script to its phonemes. 
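The combining behaviour mentioned above can be demonstrated with Python's standard unicodedata module: a precomposed letter and a base letter followed by a combining mark are different code-point sequences that render identically, and Unicode normalization makes them comparable.

```python
import unicodedata

precomposed = "\u00e9"   # é as a single code point (LATIN SMALL LETTER E WITH ACUTE)
combining = "e\u0301"    # e followed by COMBINING ACUTE ACCENT

print(precomposed == combining)          # False: different code-point sequences
print(len(precomposed), len(combining))  # 1 2

# Normalizing to NFC composes the pair into the single code point,
# so the two spellings compare equal after normalization.
print(unicodedata.normalize("NFC", combining) == precomposed)  # True
```

This is why text processing that compares or searches strings containing diacritics usually normalizes its input first.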
Exceptions are unassimilated foreign loanwords, including borrowings from French and, increasingly, Spanish like jalapeño; however, the diacritic is also sometimes omitted from such words. Loanwords that frequently appear with the diacritic in English include "café", "résumé" or "resumé" (a usage that helps distinguish it from the verb "resume"), "soufflé", and "naïveté" (see "English terms with diacritical marks"). In older practice (and even among some orthographically conservative modern writers) one may see examples such as "élite", "mêlée" and "rôle." English speakers and writers once used the diaeresis more often than now in words such as "coöperation" (from Fr. "coopération"), "zoölogy" (from Grk. "zoologia"), and "seeër" (now more commonly "see-er" or simply "seer") as a way of indicating that adjacent vowels belonged to separate syllables, but this practice has become far less common. "The New Yorker" magazine is a major publication that continues to use the diaeresis in place of a hyphen for clarity and economy of space. A few English words, out of context, can only be distinguished from others by a diacritic or modified letter, including exposé, lamé, maté, öre, øre, pâté, and rosé. The same is true of "résumé", alternately "resumé", but nevertheless it is regularly spelled "resume". In a few words, diacritics that did not exist in the original have been added for disambiguation, as in maté (from Sp. and Port. "mate"), saké (the standard Romanization of the Japanese has no accent mark), and Malé (from Dhivehi މާލެ), to clearly distinguish them from the English words "mate", "sake", and "male". The acute and grave accents are occasionally used in poetry and lyrics: the acute to indicate stress overtly where it might be ambiguous ("rébel" vs. "rebél") or nonstandard for metrical reasons ("caléndar"), the grave to indicate that an ordinarily silent or elided syllable is pronounced ("warnèd", "parlìament"). 
In certain personal names such as "Renée" and "Zoë", two spellings often exist, and the preference will be known only to those close to the person. Even when the name of a person is spelled with a diacritic, like Charlotte Brontë, this may be dropped in English-language articles and even official documents such as passports, whether through carelessness, because the typist does not know how to enter letters with diacritical marks, or for technical reasons; California, for example, does not allow names with diacritics, as the computer system cannot process such characters. Diacritics also appear in some worldwide company names and trademarks such as Nestlé or Citroën. The following languages have letter-diacritic combinations that are not considered independent letters. Several languages that are not written with the Roman alphabet are transliterated, or romanized, using diacritics. Possibly the greatest number of combining diacritics "required" to compose a valid character in any Unicode language is 8, for the "well-known grapheme cluster in Tibetan and Ranjana scripts", ཧྐྵྨླྺྼྻྂ, or HAKṢHMALAWARAYAṀ. It is U+0F67 U+0F90 U+0FB5 U+0FA8 U+0FB3 U+0FBA U+0FBC U+0FBB U+0F82, or: TIBETAN LETTER HA + TIBETAN SUBJOINED LETTER KA + TIBETAN SUBJOINED LETTER SSA + TIBETAN SUBJOINED LETTER MA + TIBETAN SUBJOINED LETTER LA + TIBETAN SUBJOINED LETTER FIXED-FORM WA + TIBETAN SUBJOINED LETTER FIXED-FORM RA + TIBETAN SUBJOINED LETTER FIXED-FORM YA + TIBETAN SIGN NYI ZLA NAA DA. Some users have explored the limits of rendering in web browsers and other software by "decorating" words with multiple nonsensical diacritics per character. The result is called "Zalgo text". The composed bogus characters and words can be copied and pasted normally via the system clipboard. Example: c̳̻͚̻̩̻͉̯̄̏͑̋͆̎͐ͬ͑͌́͢h̵͔͈͍͇̪̯͇̞͖͇̜͉̪̪̤̙ͧͣ̓̐̓ͤ͋͒ͥ͑̆͒̓͋̑́͞ǎ̡̮̤̤̬͚̝͙̞͎̇ͧ͆͊ͅo̴̲̺͓̖͖͉̜̟̗̮̳͉̻͉̫̯̫̍̋̿̒͌̃̂͊̏̈̏̿ͧ́ͬ̌ͥ̇̓̀͢͜s̵̵̘̹̜̝̘̺̙̻̠̱͚̤͓͚̠͙̝͕͆̿̽ͥ̃͠͡
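The Tibetan stack listed above can be rebuilt from its code points with a few lines of Python, confirming the sequence of one base letter plus eight combining characters:

```python
import unicodedata

# The code points given in the text, in order.
codepoints = [0x0F67, 0x0F90, 0x0FB5, 0x0FA8, 0x0FB3,
              0x0FBA, 0x0FBC, 0x0FBB, 0x0F82]
cluster = "".join(chr(cp) for cp in codepoints)

print(len(cluster))  # 9 code points, rendered as a single grapheme cluster
for ch in cluster:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

The printed names match the list in the text, from TIBETAN LETTER HA down to TIBETAN SIGN NYI ZLA NAA DA; whether the cluster displays as one glyph depends on the font and renderer.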
https://en.wikipedia.org/wiki?curid=8439
Didgeridoo The didgeridoo (also spelt didjeridu) is a wind instrument. The didgeridoo was developed by Aboriginal peoples of northern Australia, likely within the last 1,000 years, and is now in use around the world. The name for the Yolngu instrument is the yiḏaki (yidaki), or more recently by some, mandapul; in west Arnhem Land it is known as a mago. A didgeridoo is usually cylindrical or conical, and can measure anywhere from long. Most are around long. Generally, the longer the instrument, the lower its pitch or key. However, flared instruments play a higher pitch than unflared instruments of the same length. There are no reliable sources of the exact age of the didgeridoo. Archaeological studies suggest that people of the Kakadu region in Northern Australia have been using the didgeridoo for less than 1,000 years, based on the dating of rock art paintings. A clear rock painting in Ginga Wardelirrhmeng, on the northern edge of the Arnhem Land plateau, from the freshwater period (which began 1500 years ago) shows a didgeridoo player and two songmen participating in an Ubarr ceremony. It is thus thought that it was developed by Aboriginal peoples of northern Australia, possibly in Arnhem Land. T.B. Wilson's "Narrative of a Voyage Round the World" (1835) includes a drawing of an Aboriginal man from Raffles Bay on the Cobourg Peninsula (about east of Darwin) playing the instrument. Others observed such an instrument in the same area, made of bamboo and about long. In 1893, English palaeontologist Robert Etheridge, Junior observed the use of "three very curious trumpets" made of bamboo in northern Australia. There were then two native species of bamboo growing along the Adelaide River, Northern Territory. According to A.P. Elkin, in 1938 the instrument was "only known in eastern Kimberley and the northern third of the Northern Territory". The name "didgeridoo" is not of Aboriginal Australian origin and is considered to be an onomatopoetic word. 
The earliest occurrences of the word in print include a 1908 edition of the "Hamilton Spectator" referring to a "'did-gery-do' (hollow bamboo)", a 1914 edition of "The Northern Territory Times and Gazette", and a 1919 issue of "Smith's Weekly" where it was referred to as a "didjerry" which produced the sound – (phonic) "didjerry, didjerry, didjerry and so on ad infinitum". A rival explanation, that didgeridoo is a corruption of the Irish Gaelic language phrase "dúdaire dubh" or "dúidire dúth", is controversial. "Dúdaire"/"dúidire" is a noun that, depending on the context, may mean "trumpeter", "hummer", "crooner" or "puffer" while "dubh" means "black" and "dúth" means "native". There are numerous names for the instrument among the Aboriginal peoples of northern Australia, none of which closely resemble the word "didgeridoo" (see below). Some didgeridoo enthusiasts, scholars and Aboriginal people advocate using local language names for the instrument. "Yiḏaki" (transcribed "yidaki" in English, sometimes spelt "yirdaki") is one of the most commonly used names although, strictly speaking, it refers to a specific type of the instrument made and used by the Yolngu peoples of north-east Arnhem Land. Some Yolngu people began using the word "mandapul" after 2011, out of respect for the passing of a Manggalili man who had a name sounding similar to yidaki. In west Arnhem Land, it is known as a "mago", a name popularised by virtuoso player David Blanasi, a Bininj man, whose language was Kunwinjku, and who brought the didgeridoo to world prominence. However the mago is slightly different from the Yiḏaki: usually shorter, and sounding somewhat different – a slightly fuller and richer sound, but without the "overtone" note. There are at least 45 names for the didgeridoo, several of which suggest its original construction of bamboo, such as "bambu", "bombo", "kambu", and "pampu", which are still used in the "lingua franca" by some Aboriginal people. 
The following are some of the more common regional names. The didgeridoo is classified as a wind instrument and is similar in form to a straight trumpet, but made of wood. It has also been called a dronepipe. Traditional didgeridoos are usually made from hardwoods, especially the various eucalyptus species that are endemic to northern and central Australia. Generally the main trunk of the tree is harvested, though a substantial branch may be used instead. Traditional didgeridoo makers seek suitably hollow live trees in areas with obvious termite activity. Termites attack these living eucalyptus trees, removing only the dead heartwood of the tree, as the living sapwood contains a chemical that repels the insects. Various techniques are employed to find trees with a suitable hollow, including knowledge of landscape and termite activity patterns, and a kind of tap or knock test, in which the bark of the tree is peeled back, and a fingernail or the blunt end of a tool, such as an axe, is knocked against the wood to determine if the hollow produces the right resonance. Once a suitably hollow tree is found, it is cut down and cleaned out, the bark is taken off, the ends trimmed, and the exterior is shaped; this results in a finished instrument. A rim of beeswax may be applied to the mouthpiece end. Non-traditional didgeridoos can be made from native or non-native hardwoods (typically split, hollowed and rejoined), glass, fibreglass, metal, agave, clay, hemp (in the form of a bioplastic named zelfo), PVC piping and carbon fibre. These typically have an upper inside diameter of around 1.25 inches down to a bell end of anywhere between two and eight inches, and have a length corresponding to the desired key. 
The end of the pipe can be shaped and smoothed to create a comfortable mouthpiece, or an added mouthpiece can be made of any shaped and smoothed material such as rubber, a rubber stopper with a hole, or beeswax. Modern didgeridoo designs are distinct from the traditional Australian Aboriginal didgeridoo, and are innovations recognised by musicologists. Didgeridoo design innovation started in the late 20th century, using non-traditional materials and non-traditional shapes. However, the practice has sparked a good deal of debate (aesthetic, ethical, and legal) among indigenous practitioners and non-indigenous people. Didgeridoos can be painted by their maker or a dedicated artist using traditional or modern paints, while others retain the natural wood grain design with minimal or no decoration. The didgeridoo is played with continuously vibrating lips to produce the drone while using a special breathing technique called circular breathing. This requires breathing in through the nose whilst simultaneously expelling stored air out of the mouth using the tongue and cheeks. By use of this technique, a skilled player can replenish the air in their lungs, and with practice can sustain a note for as long as desired. Recordings exist of modern didgeridoo players playing continuously for more than 40 minutes; Mark Atkins on "Didgeridoo Concerto" (1994) plays for over 50 minutes continuously. The didgeridoo has been described as "an aural kaleidoscope of timbres" whose "extremely difficult virtuoso techniques developed by expert performers find no parallel elsewhere." A termite-bored didgeridoo has an irregular shape that, overall, usually increases in diameter towards the lower end. This shape means that its resonances occur at frequencies that are not harmonically spaced in frequency. This contrasts with the harmonic spacing of the resonances in a cylindrical plastic pipe, whose resonant frequencies fall in the ratio 1:3:5 etc. 
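The 1:3:5 spacing of an idealized cylindrical pipe (closed at the player's lips, open at the far end) follows from f_n = (2n − 1)·c/4L. The short sketch below uses assumed illustrative values (speed of sound 343 m/s, a 1.5 m pipe); a real termite-bored bore, being irregular and flaring, shifts these resonances away from this ideal spacing.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C (assumption)
LENGTH = 1.5            # m, an assumed typical instrument length

def closed_pipe_resonances(length, n_modes=4):
    # Closed-open cylinder: only odd multiples of the fundamental appear,
    # f_n = (2n - 1) * c / (4 * L), i.e. the 1:3:5:7 ratio.
    return [(2 * n - 1) * SPEED_OF_SOUND / (4 * length)
            for n in range(1, n_modes + 1)]

freqs = closed_pipe_resonances(LENGTH)
print([round(f, 1) for f in freqs])   # fundamental near 57 Hz for a 1.5 m pipe
print(round(freqs[1] / freqs[0], 2))  # 3.0 for the ideal cylinder; a real bore is nearer 8:3
```

For the ideal cylinder the second resonance is exactly three times the fundamental; the text notes that on an actual didgeridoo it falls around a frequency ratio of 8:3 (about 2.67), an 11th above the fundamental.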
The second resonance of a didgeridoo (the note sounded by overblowing) is usually around an 11th higher than the fundamental frequency (a frequency ratio of 8:3). The vibration produced by the player's lips has harmonics, i.e., it has frequency components falling exactly in the ratio 1:2:3 etc. However, the non-harmonic spacing of the instrument's resonances means that the harmonics of the fundamental note are not systematically assisted by instrument resonances, as is usually the case for Western wind instruments (e.g., in the low range of the clarinet, the 1st, 3rd, and 5th harmonics of the reed are assisted by resonances of the bore). Sufficiently strong resonances of the vocal tract can strongly influence the timbre of the instrument. At some frequencies, whose values depend on the position of the player's tongue, resonances of the vocal tract inhibit the oscillatory flow of air into the instrument. Bands of frequencies that are not thus inhibited produce formants in the output sound. These formants, and especially their variation during the inhalation and exhalation phases of circular breathing, give the instrument its readily recognizable sound. Other variations in the didgeridoo's sound can be made by adding vocalizations to the drone. Most of the vocalizations are related to sounds emitted by Australian animals, such as the dingo or the kookaburra. To produce these sounds, the players simply have to use their vocal folds to produce the sounds of the animals whilst continuing to blow air through the instrument. The results range from very high-pitched sounds to much lower sounds involving interference between the lip and vocal fold vibrations. Adding vocalizations increases the complexity of the playing. Modern performances using the didgeridoo include combining it with beatboxing. It was featured on the British children's TV series "Blue Peter". The didgeridoo has also found a place in the experimental and avant-garde music scene. 
Industrial music bands like Test Department generated sounds from this instrument and used them in their industrial performances. It is very often used in the music project Naakhum which combines Extreme Metal and Ethnic music. Early songs by the acid jazz band Jamiroquai featured didgeridoo player Wallis Buchanan (until he left the band in 1999). A notable song featuring a didgeridoo is the band's first single "When You Gonna Learn", which features prominent didgeridoo playing in both the introduction and solo sections. The instrument is commonly used by ambient artist Steve Roach as a complement to his produced soundscapes, in both live and recorded formats. It features prominently in his collaborative work "" (with Australian Aboriginal artist David Hudson and cellist Sarah Hopkins) as well as "Dreamtime Return". It is used in the Indian song "Jaane Kyon" from the film "Dil Chahta Hai". Chris Brooks, lead singer of the New Zealand hard rock band Like a Storm uses the didgeridoo in some of the band's songs including "Love the Way You Hate Me" from their album "". Kate Bush made extensive use of the didgeridoo (played by Australian musician Rolf Harris) on her album "The Dreaming", which was written and recorded after a holiday in Australia. Charlie McMahon, who formed the group Gondwanaland, was one of the first non-Aboriginal players to gain fame as a professional didgeridoo player. He has toured internationally with Midnight Oil. He invented the didjeribone, a sliding didgeridoo made from two lengths of plastic tubing; its playing style is somewhat in the manner of a trombone, hence the portmanteau name. Traditionally, the didgeridoo was played as an accompaniment to ceremonial dancing and singing and for solo or recreational purposes. For Aboriginal peoples of northern Australia, the yidaki is still used to accompany singers and dancers in cultural ceremonies. 
For the Yolngu people, the yidaki is part of their whole physical and cultural landscape and environment, comprising the people and spirit beings which belong to their country, kinship system and the Yolngu Matha language. It is connected to Yolngu Law and underpinned by ceremony, in song, dance, visual art and stories. Pair sticks, sometimes called clapsticks ("bilma" or "bimla" by some traditional groups), establish the beat for the songs during ceremonies. The rhythm of the didgeridoo and the beat of the clapsticks are precise, and these patterns have been handed down for many generations. In the Wangga genre, the song-man starts with vocals and then introduces "bilma" to the accompaniment of didgeridoo. Traditionally, only men play the didgeridoo and sing during ceremonial occasions and playing by females is sometimes discouraged by Aboriginal communities and elders. In 2008, publisher Harper Collins apologized for its book "The Daring Book for Girls", which openly encouraged girls to play the instrument after some Aboriginal academics described such encouragement as "extreme cultural insensitivity" and "an extreme faux pas... part of a general ignorance that mainstream Australia has about Aboriginal culture." However, Linda Barwick, an ethnomusicologist, says that though traditionally women have not played the didgeridoo in ceremony, in informal situations there is no prohibition in the Dreaming Law. For example, Jemima Wimalu, a Mara woman from the Roper River is very proficient at playing the didgeridoo and is featured on the record "Aboriginal Sound Instruments" released in 1978. In 1995, musicologist Steve Knopoff observed Yirrkala women performing "djatpangarri" songs that are traditionally performed by men and in 1996, ethnomusicologist Elizabeth MacKinley reported women of the Yanyuwa group giving public performances. 
While there is no prohibition in the area of the didgeridoo's origin, such restrictions have been applied by other Indigenous communities. The didgeridoo was introduced to the Kimberleys almost a century ago but it is only in the last decade that Aboriginal men have shown adverse reactions to women playing the instrument and prohibitions are especially evident in the South East of Australia. The belief that women are prohibited from playing is widespread among non-Aboriginal people and is also common among Aboriginal communities in Southern Australia; some ethnomusicologists believe that the dissemination of the taboo belief and other misconceptions is a result of commercial agendas and marketing. The majority of commercial didgeridoo recordings available are distributed by multinational recording companies and feature non-Aboriginal people playing a New Age style of music with liner notes promoting the instrument's spirituality which misleads consumers about the didgeridoo's secular role in traditional Aboriginal culture. The taboo is particularly strong among many Aboriginal groups in the South East of Australia, where it is forbidden and considered "cultural theft" for non-Aboriginal women, and especially performers of New Age music regardless of gender, to play or even touch a didgeridoo. A 2005 study reported in the "British Medical Journal" found that learning and practising the didgeridoo helped reduce snoring and obstructive sleep apnea by strengthening muscles in the upper airway, thus reducing their tendency to collapse during sleep. In the study, intervention subjects were trained in and practiced didgeridoo playing, including circular breathing and other techniques. Control subjects were asked not to play the instrument. Subjects were surveyed before and after the study period to assess the effects of intervention. A small 2010 study noted improvements in the asthma management of Aboriginal teens when incorporating didgeridoo playing.
https://en.wikipedia.org/wiki?curid=8443
Developmental biology Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism. In the late 20th century, the discipline largely transformed into evolutionary developmental biology. The main processes involved in the embryonic development of animals are regional specification, morphogenesis, cell differentiation, growth, and the overall control of timing explored in evolutionary developmental biology. The development of plants involves similar processes to that of animals. However, plant cells are mostly immotile, so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development. Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well-known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function, and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation. Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. 
A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed. Regeneration indicates the ability to regrow a missing part. This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But most interest among developmental biologists has been shown in the regeneration of parts in free-living animals. In particular, four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: "Hydra", which can regenerate any part of the polyp from a small fragment, and planarian worms, which can usually regenerate both heads and tails. Both of these examples have continuous cell turnover fed by stem cells and, in planaria at least, some of the stem cells have been shown to be pluripotent. The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, and the limbs of urodele amphibians. Considerable information is now available about amphibian limb regeneration and it is known that each cell type regenerates itself, except for connective tissues, where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo. There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. 
If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected. The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote. This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm. These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions. Mouse epiblast primordial germ cells (see Figure: “The initial stages of human embryogenesis”) undergo extensive epigenetic reprogramming. This process involves genome-wide DNA demethylation, chromatin reorganization and epigenetic imprint erasure leading to totipotency. DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway. Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm, mesoderm and endoderm. These sheets are known as germ layers. This is the process of gastrulation. During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta, needed for support and nutrition of the embryo, and also establish differences of commitment along the anteroposterior axis (head, trunk and tail). Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. 
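The gradient mechanism described here is often illustrated with a simple exponential-decay model (the "French flag" picture): cells read the local concentration against thresholds and adopt the corresponding fate. The numbers below are illustrative assumptions, not measured values:

```python
import math

DECAY_LENGTH = 20.0  # distance at which concentration falls to 1/e (assumed)
SOURCE_LEVEL = 1.0   # concentration at the signaling center (assumed)

def concentration(x):
    # Steady-state profile of a morphogen produced at x = 0 that
    # diffuses away and decays: an exponential gradient.
    return SOURCE_LEVEL * math.exp(-x / DECAY_LENGTH)

def fate(x, high=0.5, low=0.2):
    # Cells compare the local concentration against thresholds and
    # upregulate a different set of control genes in each zone.
    c = concentration(x)
    if c >= high:
        return "zone A"   # nearest the source
    if c >= low:
        return "zone B"
    return "zone C"       # farthest from the source

for x in (5, 25, 60):
    print(x, round(concentration(x), 3), fate(x))
```

The three zones at progressively greater distances from the source stand in for the regions in which different combinations of developmental control genes are upregulated.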
The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside. Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. Growth in embryos is mostly autonomous. For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy. The whole process needs to be coordinated in time and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events. Developmental processes are very evident during the process of metamorphosis. This occurs in various types of animal. 
Well-known examples are seen in frogs, which usually hatch as tadpoles and metamorphose into adult frogs, and certain insects, which hatch as larvae and are then remodeled into the adult form during a pupal stage. All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog "Xenopus", and the biology of the imaginal discs, which generate the adult body parts of the fly "Drosophila melanogaster". Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology. Plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." A vascular plant begins from a single-celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). 
By the end of embryogenesis, the young plant will have all the parts necessary to begin its life. Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium. In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower-growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism). Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin. Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation.
Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility. Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and their evolution. During the transition of plants to land, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants. Most land plants share a common ancestor, multicellular algae. An example of the evolution of plant morphology is seen in charophytes. Studies have shown that charophytes have traits that are homologous to land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory for the evolution of plant morphology is the antithetic theory. The antithetic theory states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte. The sporophyte then develops as an independent organism. Much of developmental biology research in recent decades has focused on the use of a small number of model organisms. It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers.
In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans. Plants: Vertebrates: Invertebrates: Also popular for some purposes have been sea urchins and ascidians. For studies of regeneration urodele amphibians such as the axolotl "Ambystoma mexicanum" are used, and also planarian worms such as "Schmidtea mediterranea". Organoids have also been demonstrated as an efficient model for development. Plant development has focused on the thale cress "Arabidopsis thaliana" as a model organism.
https://en.wikipedia.org/wiki?curid=8449
Double planet In astronomy, a double planet (also binary planet) is a binary system where both objects are of planetary mass. The term is not recognized by the International Astronomical Union (IAU) and is therefore not an official classification. At its 2006 General Assembly, the International Astronomical Union considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned in favor of the current definition of planet. In promotional materials advertising the SMART-1 mission and pre-dating the IAU planet definition, the European Space Agency once referred to the Earth–Moon system as a double planet. Some binary asteroids with components of roughly equal mass are sometimes informally referred to as double minor planets. These include binary asteroids 69230 Hermes and 90 Antiope and binary Kuiper belt objects (KBOs) 79360 Sila–Nunam and . There is debate as to what criteria should be used to distinguish "double planet" from a "planet–moon system". The following are considerations. A definition proposed in the Astronomical Journal calls for both bodies to individually satisfy an orbit-clearing criterion in order to be called a double planet. One important consideration in defining "double planet" is the ratio of the masses of the two bodies. A mass ratio of 1 would indicate bodies of equal mass, and bodies with mass ratios closer to 1 are more attractive to label as "doubles". Using this definition, the satellites of Mars, Jupiter, Saturn, Uranus, and Neptune can all easily be excluded; they all have masses less than 0.00025 () of the planets around which they revolve. Some dwarf planets, too, have satellites substantially less massive than the dwarf planets themselves. The most notable exception is the Pluto–Charon system. 
The Charon-to-Pluto mass ratio of 0.117 is close enough to 1 that Pluto and Charon have frequently been described by many scientists as "double dwarf planets" ("double planets" prior to the 2006 definition of "planet"). The International Astronomical Union (IAU) currently calls Charon a satellite of Pluto, but has explicitly expressed a willingness to reconsider classifying the bodies as double dwarf planets at a future time. The Moon-to-Earth mass ratio of 0.01230 is also notably close to 1 when compared to all other satellite-to-planet ratios. Consequently, some scientists view the Earth-Moon system as a double planet as well, though this is a minority view. Eris's lone satellite, Dysnomia, has a radius somewhere around that of Eris; assuming similar densities (Dysnomia's compositional make-up may or may not differ substantially from Eris's), the mass ratio would be a value intermediate to the Moon–Earth and Charon–Pluto ratios. The next criteria both attempt to answer the question "How close to 1 must the mass ratio be?" Currently, the most commonly proposed definition for a double-planet system is one in which the barycenter, around which both bodies orbit, lies outside both bodies. Under this definition, Pluto and Charon are double dwarf planets, since they orbit a point clearly outside of Pluto, as visible in animations created from images of the "New Horizons" space probe in June 2015. Under this definition, the Earth–Moon system is not currently a double planet; although the Moon is massive enough to cause the Earth to make a noticeable revolution around this center of mass, this point nevertheless lies well within Earth. However, the Moon currently migrates outward from Earth at a rate of approximately per year; in a few billion years, the Earth–Moon system's center of mass will lie outside Earth, which would make it a double-planet system.
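The barycenter criterion can be checked directly: the center of mass of a two-body system lies at a distance of separation times m2/(m1+m2) from the center of the primary, and the question is simply whether that distance exceeds the primary's radius. A sketch using approximate round-number masses and separations for the two systems discussed (the figures are illustrative, not precise):

```python
def barycenter_offset(separation, m_primary, m_secondary):
    """Distance from the primary's center to the system barycenter."""
    return separation * m_secondary / (m_primary + m_secondary)

# Approximate values: separations in km, masses in kg.
earth_moon = barycenter_offset(384_400, 5.972e24, 7.346e22)
pluto_charon = barycenter_offset(19_600, 1.303e22, 1.586e21)

print(earth_moon)    # about 4,700 km: inside Earth's ~6,371 km radius
print(pluto_charon)  # about 2,100 km: outside Pluto's ~1,188 km radius
```

With these numbers the Earth–Moon barycenter falls well inside the Earth, while the Pluto–Charon barycenter falls well outside Pluto, reproducing the classification the criterion implies.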
The center of mass of the Jupiter–Sun system lies outside the surface of the Sun, though arguing that Jupiter and the Sun are a double star is "not" analogous to arguing Pluto-Charon is a double dwarf planet. The problem is that Jupiter is not a star, or even a brown dwarf, and due to its low mass it is unable to achieve any form of fusion. Isaac Asimov suggested a distinction between planet–moon and double-planet structures based in part on what he called a "tug-of-war" value, which does not consider their relative sizes. This quantity is simply the ratio of the force exerted on the smaller body by the larger (primary) body to the force exerted on the smaller body by the Sun. This can be shown to equal (m_p / m_s) · (d_s / d_p)^2, where m_p is the mass of the primary (the larger body), m_s is the mass of the Sun, d_s is the distance between the smaller body and the Sun, and d_p is the distance between the smaller body and the primary. The tug-of-war value does not rely on the mass of the satellite (the smaller body). This formula actually reflects the relation of the gravitational effects on the smaller body from the larger body and from the Sun. The tug-of-war figure for Saturn's moon Titan is 380, which means that Saturn's hold on Titan is 380 times as strong as the Sun's hold on Titan. Titan's tug-of-war value may be compared with that of Saturn's moon Phoebe, which has a tug-of-war value of just 3.5. So Saturn's hold on Phoebe is only 3.5 times as strong as the Sun's hold on Phoebe. Asimov calculated tug-of-war values for several satellites of the planets. He showed that even the largest gas giant, Jupiter, had only a slightly better hold than the Sun on its outer captured satellites, some with tug-of-war values not much higher than one. In nearly every one of Asimov's calculations the tug-of-war value was found to be greater than one, so in those cases the Sun loses the tug-of-war with the planets.
The one exception was Earth's Moon, where the Sun wins the tug-of-war with a value of 0.46, which means that Earth's hold on the Moon is less than half that of the Sun's. Asimov included this with his other arguments that Earth and the Moon should be considered a binary planet. See the Path of Earth and Moon around Sun section in the "Orbit of the Moon" article for a more detailed explanation. This definition of double planet depends on the pair's distance from the Sun. If the Earth–Moon system happened to orbit farther away from the Sun than it does now, then Earth would win the tug of war. For example, at the orbit of Mars, the Moon's tug-of-war value would be 1.05. Also, several tiny moons discovered since Asimov's proposal would qualify as double planets by this argument. Neptune's small outer moons Neso and Psamathe, for example, have tug-of-war values of 0.42 and 0.44, less than that of Earth's Moon. Yet their masses are tiny compared to Neptune's, with an estimated ratio of 1.5 () and 0.4 (). A final consideration is the way in which the two bodies came to form a system. Both the Earth-Moon and Pluto-Charon systems are thought to have been formed as a result of giant impacts: one body was impacted by a second body, resulting in a debris disk, and through accretion, either two new bodies formed or one new body formed, with the larger body remaining (but changed). However, a giant impact is not a sufficient condition for two bodies being "double planets" because such impacts can also produce tiny satellites, such as the four small outer satellites of Pluto. A now-abandoned hypothesis for the origin of the Moon was actually called the "double-planet hypothesis"; the idea was that the Earth and the Moon formed in the same region of the Solar System's proto-planetary disk, forming a system under gravitational interaction. 
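Asimov's tug-of-war value can be evaluated directly from its definition, since the satellite's mass and the gravitational constant cancel out of the force ratio, leaving (m_primary / m_sun) times (d_sun / d_primary) squared. A sketch using approximate round-number masses and distances (the inputs are illustrative, so the results only approximate Asimov's published figures):

```python
M_SUN = 1.989e30  # kg, approximate solar mass

def tug_of_war(m_primary, d_to_sun, d_to_primary):
    """Ratio of the primary's gravitational pull on a satellite to the
    Sun's pull on the same satellite (the satellite's mass cancels)."""
    return (m_primary / M_SUN) * (d_to_sun / d_to_primary) ** 2

# Approximate masses in kg and distances in km.
moon  = tug_of_war(5.972e24, 1.496e8, 3.844e5)   # Earth's hold on the Moon
titan = tug_of_war(5.683e26, 1.427e9, 1.222e6)   # Saturn's hold on Titan

print(round(moon, 2))   # roughly 0.45, near Asimov's 0.46: the Sun wins
print(round(titan))     # roughly 390, near the quoted 380: Saturn wins
```

Note that only ratios of distances enter the formula, so any consistent length unit works; the value below 1 for the Moon is what motivated Asimov's binary-planet argument.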
This idea, too, is problematic as a criterion for defining two bodies as "double planets", because planets can "capture" moons through gravitational interaction. For example, the moons of Mars (Phobos and Deimos) are thought to be asteroids captured long ago by Mars. Such a definition would also deem Neptune–Triton a double planet, since Triton was a Kuiper belt body of the same size and similar composition to Pluto, later captured by Neptune.
https://en.wikipedia.org/wiki?curid=8454
Denaturation (biochemistry) Denaturation is a process in which proteins or nucleic acids lose the quaternary structure, tertiary structure, and secondary structure which is present in their native state, by application of some external stress or compound such as a strong acid or base, a concentrated inorganic salt, an organic solvent (e.g., alcohol or chloroform), radiation or heat. If proteins in a living cell are denatured, this results in disruption of cell activity and possibly cell death. Protein denaturation is also a consequence of cell death. Denatured proteins can exhibit a wide range of characteristics, from conformational change and loss of solubility to aggregation due to the exposure of hydrophobic groups. Denatured proteins lose their 3D structure and therefore cannot function. Protein folding is key to whether a globular or membrane protein can do its job correctly; it must be folded into the right shape to function. However, hydrogen bonds, which play a big part in folding, are rather weak and thus easily affected by heat, acidity, varying salt concentrations, and other stressors which can denature the protein. This is one reason why homeostasis is physiologically necessary in many life forms. This concept is unrelated to denatured alcohol, which is alcohol that has been mixed with additives to make it unsuitable for human consumption. When food is cooked, some of its proteins become denatured. This is why boiled eggs become hard and cooked meat becomes firm. A classic example of denaturing in proteins comes from egg whites, which are typically largely egg albumins in water. Fresh from the eggs, egg whites are transparent and liquid. Cooking the thermally unstable whites turns them opaque, forming an interconnected solid mass. The same transformation can be effected with a denaturing chemical. Pouring egg whites into a beaker of acetone will also turn egg whites translucent and solid. 
The skin that forms on curdled milk is another common example of denatured protein. The cold appetizer known as ceviche is prepared by chemically "cooking" raw fish and shellfish in an acidic citrus marinade, without heat. Denatured proteins can exhibit a wide range of characteristics, from loss of solubility to protein aggregation. Proteins, or polypeptides, are polymers of amino acids. A protein is created by ribosomes that "read" RNA that is encoded by codons in the gene and assemble the requisite amino acid combination from the genetic instruction, in a process known as translation. The newly created protein strand then undergoes posttranslational modification, in which additional atoms or molecules are added, for example copper, zinc, or iron. Once this post-translational modification process has been completed, the protein begins to fold (sometimes spontaneously and sometimes with enzymatic assistance), curling up on itself so that hydrophobic elements of the protein are buried deep inside the structure and hydrophilic elements end up on the outside. The final shape of a protein determines how it interacts with its environment. Protein folding consists of a balance between a substantial number of weak intra-molecular interactions within a protein (hydrophobic, electrostatic, and van der Waals interactions) and protein-solvent interactions. As a result, this process is heavily reliant on the environmental state in which the protein resides. These environmental conditions include, but are not limited to, temperature, salinity, pressure, and the solvents that happen to be involved. Consequently, any exposure to extreme stresses (e.g. heat or radiation, high inorganic salt concentrations, strong acids and bases) can disrupt a protein's interactions and inevitably lead to denaturation. When a protein is denatured, secondary and tertiary structures are altered but the peptide bonds of the primary structure between the amino acids are left intact.
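The burial of hydrophobic residues described above is routinely quantified with hydropathy scales; the widely used Kyte–Doolittle scale assigns each amino acid a hydropathy value, and a sliding-window average over a sequence highlights stretches likely to end up buried in the folded core (or membrane-embedded). A minimal sketch, with a made-up example sequence:

```python
# Kyte-Doolittle hydropathy values, keyed by one-letter amino acid codes.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
      'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
      'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
      'K': -3.9, 'R': -4.5}

def hydropathy_profile(seq, window=5):
    """Mean hydropathy over a sliding window; strongly positive stretches
    are hydrophobic and tend to be buried in the folded core."""
    return [sum(KD[a] for a in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

# Illustrative sequence: a hydrophobic run flanked by charged residues.
profile = hydropathy_profile("KDEILLVVLFRKE")
print(max(profile) > 0)  # the ILLVVLF stretch scores strongly positive
```

Denaturation exposes exactly these high-scoring stretches to solvent, which is why aggregation via hydrophobic contacts is a common outcome.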
Since all structural levels of the protein determine its function, the protein can no longer perform its function once it has been denatured. This is in contrast to intrinsically unstructured proteins, which are unfolded in their native state, but still functionally active and tend to fold upon binding to their biological target. Most biological substrates lose their biological function when denatured. For example, enzymes lose their activity, because the substrates can no longer bind to the active site, and because amino acid residues involved in stabilizing substrates' transition states are no longer positioned to be able to do so. The denaturing process and the associated loss of activity can be measured using techniques such as dual-polarization interferometry, CD, QCM-D and MP-SPR. By targeting proteins, heavy metals have been known to disrupt the function and activity carried out by proteins. Heavy metals include transition metals as well as a select number of metalloids. These metals, when interacting with native, folded proteins, tend to play a role in obstructing their biological activity. This interference can be carried out in a number of different ways. These heavy metals can form a complex with the functional side chain groups present in a protein or form bonds to free thiols. Heavy metals also play a role in oxidizing amino acid side chains present in proteins. Along with this, when interacting with metalloproteins, heavy metals can dislocate and replace key metal ions. As a result, heavy metals can interfere with folded proteins, which can strongly deter protein stability and activity. In many cases, denaturation is reversible (the proteins can regain their native state when the denaturing influence is removed). This process can be called renaturation.
This understanding has led to the notion that all the information needed for proteins to assume their native state was encoded in the primary structure of the protein, and hence in the DNA that codes for the protein, the so-called "Anfinsen's thermodynamic hypothesis". Denaturation can also be irreversible. This irreversibility is typically a kinetic, not thermodynamic, irreversibility, as a folded protein generally has lower free energy. Through kinetic irreversibility, the fact that the protein is stuck in a local minimum can stop it from ever refolding after it has been irreversibly denatured. Denaturation can also be caused by changes in pH, which can affect the chemistry of the amino acids and their residues. The ionizable groups in amino acids are able to become ionized when changes in pH occur. A pH change to more acidic or more basic conditions can induce unfolding. Acid-induced unfolding often occurs between pH 2 and 5, while base-induced unfolding usually requires pH 10 or higher. Nucleic acids (including RNA and DNA) are nucleotide polymers synthesized by polymerase enzymes during either transcription or DNA replication. Following 5'-3' synthesis of the backbone, individual nitrogenous bases are capable of interacting with one another via hydrogen bonding, thus allowing for the formation of higher-order structures. Nucleic acid denaturation occurs when hydrogen bonding between nucleotides is disrupted, and results in the separation of previously annealed strands. For example, denaturation of DNA due to high temperatures results in the disruption of base pairs and the separation of the double-stranded helix into two single strands. Nucleic acid strands are capable of re-annealing when "normal" conditions are restored, but if restoration occurs too quickly, the nucleic acid strands may re-anneal imperfectly, resulting in the improper pairing of bases.
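The pH-driven ionization changes described above can be made concrete with the Henderson–Hasselbalch relation: the protonated fraction of a group with acid constant pKa is 1/(1 + 10^(pH − pKa)). A sketch using a typical textbook pKa of about 4.25 for a glutamate side chain shows how a drop from physiological to strongly acidic pH flips its charge state:

```python
def protonated_fraction(pH, pKa):
    """Fraction of an ionizable group in its protonated form (neutral,
    in the case of carboxyl groups), via the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

PKA_GLU = 4.25  # approximate side-chain pKa of glutamate (textbook value)

print(protonated_fraction(7.0, PKA_GLU))  # ~0.002: almost fully charged
print(protonated_fraction(2.0, PKA_GLU))  # ~0.99: mostly neutralized
```

Neutralizing acidic side chains in this way removes salt bridges that help hold the fold together, which is one route to the acid-induced unfolding mentioned above.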
The non-covalent interactions between antiparallel strands in DNA can be broken in order to "open" the double helix when biologically important mechanisms such as DNA replication, transcription, DNA repair or protein binding are set to occur. The area of partially separated DNA is known as the denaturation bubble, which can be more specifically defined as the opening of a DNA double helix through the coordinated separation of base pairs. The first model that attempted to describe the thermodynamics of the denaturation bubble was introduced in 1966 and called the Poland-Scheraga Model. This model describes the denaturation of DNA strands as a function of temperature. As the temperature increases, the hydrogen bonds between the Watson and Crick base pairs are increasingly disturbed and "denatured loops" begin to form. However, the Poland-Scheraga Model is now considered elementary because it fails to account for the confounding implications of DNA sequence, chemical composition, stiffness and torsion. Recent thermodynamic studies have inferred that the lifetime of a singular denaturation bubble ranges from 1 microsecond to 1 millisecond. This information is based on established timescales of DNA replication and transcription. Currently, biophysical and biochemical research studies are being performed to more fully elucidate the thermodynamic details of the denaturation bubble. With polymerase chain reaction (PCR) being among the most popular contexts in which DNA denaturation is desired, heating is the most frequent method of denaturation. Other than denaturation by heat, nucleic acids can undergo the denaturation process through various chemical agents such as formamide, guanidine, sodium salicylate, dimethyl sulfoxide (DMSO), propylene glycol, and urea. These chemical denaturing agents lower the melting temperature (Tm) by competing for hydrogen bond donors and acceptors with pre-existing nitrogenous base pairs. 
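For short DNA oligonucleotides, the melting temperature Tm mentioned above can be roughly estimated with the Wallace rule, Tm ≈ 2·(A+T) + 4·(G+C) in degrees Celsius, which reflects the greater stability of G–C pairs (three hydrogen bonds versus two for A–T). A sketch, with the caveat that the rule is only a rough guide for oligos of roughly 14 to 20 bases and ignores salt and sequence context:

```python
def wallace_tm(seq):
    """Rough melting temperature (degrees C) of a short DNA oligo:
    2 degrees per A/T base plus 4 degrees per G/C base."""
    seq = seq.upper()
    at = seq.count('A') + seq.count('T')
    gc = seq.count('G') + seq.count('C')
    return 2 * at + 4 * gc

print(wallace_tm("ATATATATATATAT"))  # 28: AT-rich, melts easily
print(wallace_tm("GCGCGCGCGCGCGC"))  # 56: GC-rich, more stable
```

Chemical denaturants like formamide and urea effectively shift such Tm values downward by competing for the hydrogen-bond donors and acceptors that hold the strands together.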
Some agents are even able to induce denaturation at room temperature. For example, alkaline agents (e.g. NaOH) have been shown to denature DNA by changing pH and removing hydrogen-bond contributing protons. These denaturants have been employed to make denaturing gradient gel electrophoresis (DGGE) gels, which promote denaturation of nucleic acids in order to eliminate the influence of nucleic acid shape on their electrophoretic mobility. The optical activity (absorption and scattering of light) and hydrodynamic properties (translational diffusion, sedimentation coefficients, and rotational correlation times) of formamide-denatured nucleic acids are similar to those of heat-denatured nucleic acids. Therefore, depending on the desired effect, chemically denaturing DNA can provide a gentler procedure for denaturing nucleic acids than denaturation induced by heat. Studies comparing different denaturation methods such as heating, bead milling with different bead sizes, probe sonication, and chemical denaturation show that chemical denaturation can provide quicker denaturation compared to the other physical denaturation methods described. Particularly in cases where rapid renaturation is desired, chemical denaturation agents can provide an ideal alternative to heating. For example, DNA strands denatured with alkaline agents such as NaOH renature as soon as phosphate buffer is added. Small, electronegative molecules such as nitrogen and oxygen, which are the primary gases in air, significantly impact the ability of surrounding molecules to participate in hydrogen bonding. These molecules compete with surrounding hydrogen bond acceptors for hydrogen bond donors, therefore acting as "hydrogen bond breakers" and weakening interactions between surrounding molecules in the environment.
Antiparallel strands in DNA double helices are non-covalently bound by hydrogen bonding between Watson–Crick base pairs; nitrogen and oxygen therefore maintain the potential to weaken the integrity of DNA when exposed to air. As a result, DNA strands exposed to air require less force to separate and exemplify lower melting temperatures. Many laboratory techniques rely on the ability of nucleic acid strands to separate. By understanding the properties of nucleic acid denaturation, the following methods were created: Acidic protein denaturants include: Bases work similarly to acids in denaturation. They include: Most organic solvents are denaturing, including: Cross-linking agents for proteins include: Chaotropic agents include: Agents that break disulfide bonds by reduction include: Oxidizing agents such as hydrogen peroxide, elemental chlorine, hypochlorous acid (chlorine water), bromine, bromine water, iodine, nitric and other oxidising acids, and ozone react with sensitive moieties such as sulfides/thiols and activated aromatic rings (e.g. phenylalanine), in effect damaging the protein and rendering it useless. Acidic nucleic acid denaturants include: Basic nucleic acid denaturants include: Other nucleic acid denaturants include:
https://en.wikipedia.org/wiki?curid=8456
Dwight L. Moody Dwight Lyman Moody (February 5, 1837 – December 22, 1899), also known as D. L. Moody, was an American evangelist and publisher connected with the Holiness Movement, who founded the Moody Church, Northfield School and Mount Hermon School in Massachusetts (now Northfield Mount Hermon School), Moody Bible Institute and Moody Publishers. One of his most famous quotes was "Faith makes all things possible... Love makes all things easy." Moody gave up his lucrative boot and shoe business to devote his life to revivalism, working first during the Civil War with Union troops through the YMCA's United States Christian Commission. In Chicago, he built one of the major evangelical centers in the nation, which is still active. Working with singer Ira Sankey, he toured the country and the British Isles, drawing large crowds with a dynamic speaking style that preached God's love and friendship, kindness and forgiveness rather than hellfire and condemnation. Dwight Moody was born in Northfield, Massachusetts, as the seventh child in a large family. His father, Edwin J. Moody (1800–1841), was a small farmer and stonemason. His mother was Betsey Moody (née Holton; 1805–1896). They had five sons and a daughter before Dwight's birth. His father died when Dwight was age four; fraternal twins, a boy and a girl, were born one month after the father's death. Their mother struggled to support the nine children, but had to send some off to work for their room and board. Dwight, too, was sent off to work, where he received cornmeal, porridge, and milk three times a day. He complained to his mother, but when she learned that he was getting all he wanted to eat, she sent him back. During this time, she continued to send the children to church. Together with his eight siblings, Dwight was raised in the Unitarian church. His oldest brother ran away and was not heard from by the family until many years later.
When Moody turned 17, he moved to Boston to work (after receiving many job rejections locally) in an uncle's shoe store. One of the uncle's requirements was that Moody attend the Congregational Church of Mount Vernon, where Dr. Edward Norris Kirk served as the pastor. In April 1855 Moody was converted to evangelical Christianity when his Sunday school teacher, Edward Kimball, talked to him about how much God loved him. His conversion sparked the start of his career as an evangelist. Moody was not received by the church when he first applied in May 1855, and was not admitted as a member until May 4, 1856. According to Moody's memoir, his teacher, Edward Kimball, said: D. L. Moody "could not conscientiously enlist" in the Union Army during the Civil War, later describing himself as "a Quaker" in this respect. After the Civil War started, he became involved with the United States Christian Commission of the YMCA. He paid nine visits to the battlefront, being present among the Union soldiers after the Battle of Shiloh (a.k.a. Pittsburg Landing) and the Battle of Stones River; he also entered Richmond, Virginia, with the troops of General Grant. On August 28, 1862, Moody married Emma C. Revell, with whom he had a daughter, Emma Reynolds Moody, and two sons, William Revell Moody and Paul Dwight Moody. The growing Sunday School congregation needed a permanent home, so Moody started a church in Chicago, the Illinois Street Church. In June 1871 at an International Sunday School Convention in Indianapolis, Indiana, Dwight Moody met Ira D. Sankey, a gospel singer with whom Moody soon began to collaborate. Four months later, in October 1871, the Great Chicago Fire destroyed Moody's church building, as well as his house and those of most of his congregation. Many had to flee the flames, saving only their lives, and ending up completely destitute.
Moody, reporting on the disaster, said about his own situation that: "...he saved nothing but his reputation and his Bible." In the years after the fire, Moody's wealthy Chicago patron John V. Farwell tried to persuade him to make his permanent home in the city, offering to build a new house for Moody and his family. But the newly famous Moody, also sought by supporters in New York, Philadelphia, and elsewhere, chose a tranquil farm he had purchased near his birthplace in Northfield, Massachusetts. He felt he could better recover in a rural setting from his lengthy preaching trips. Northfield became an important location in evangelical Christian history in the late 19th century as Moody organized summer conferences. These were led and attended by prominent Christian preachers and evangelists from around the world. Western Massachusetts has had a rich evangelical tradition, including Jonathan Edwards preaching in colonial Northampton and C.I. Scofield preaching in Northfield. A protégé of Moody founded Moores Corner Church, in Leverett, Massachusetts, and it continues to be evangelical. Moody founded two schools here: Northfield School for Girls, founded in 1879, and the Mount Hermon School for Boys, founded in 1881. In the late 20th century, these merged, forming today's co-educational, nondenominational Northfield Mount Hermon School. During a trip to the United Kingdom in the spring of 1872, Moody became well known as an evangelist. Literary works published by the Moody Bible Institute claim that he was the greatest evangelist of the 19th century. He preached almost a hundred times and came into communion with the Plymouth Brethren. On several occasions, he filled venues with capacities of 2,000 to 4,000. According to his memoir, in the Botanic Gardens Palace, he attracted an audience estimated at between 15,000 and 30,000. That turnout continued throughout 1874 and 1875, with crowds of thousands at all of his meetings.
During his visit to Scotland, Moody was helped and encouraged by Andrew A. Bonar. The famous London Baptist preacher, Charles Spurgeon, invited him to speak, and he promoted the American as well. When Moody returned to the US, crowds of 12,000 to 20,000 were said to be as common as they had been in England. President Grant and some of his cabinet officials attended a Moody meeting on January 19, 1876. He held evangelistic meetings from Boston to New York, throughout New England, and as far west as San Francisco, also visiting other West Coast towns from Vancouver, British Columbia, Canada to San Diego. Moody aided the work of cross-cultural evangelism by promoting "The Wordless Book," a teaching tool developed in 1866 by Charles Spurgeon. In 1875, Moody added a fourth color to the design of the three-color evangelistic device: gold, to "represent heaven." This "book" has been and is still used to teach uncounted thousands of illiterate people, young and old, around the globe about the gospel message. Moody visited Britain with Ira D. Sankey, with Moody preaching and Sankey singing at meetings. Together they published books of Christian hymns. In 1883 they visited Edinburgh and raised £10,000 for the building of a new home for the Carrubbers Close Mission. Moody later preached at the laying of the foundation stone for what is now called the Carrubbers Christian Centre, one of the few buildings on the Royal Mile which continues to be used for its original purpose. Moody greatly influenced the cause of cross-cultural Christian missions after he met Hudson Taylor, a pioneer missionary to China. He actively supported the China Inland Mission and encouraged many of his congregation to volunteer for service overseas. His influence was felt among Swedes.
Although Moody was of English heritage, never visited Sweden or any other Scandinavian country, and never spoke a word of Swedish, he nonetheless became a hero revivalist among Swedish Mission Friends in Sweden and America. News of Moody's large revival campaigns in Great Britain from 1873 through 1875 traveled quickly to Sweden, making "Mr. Moody" a household name in homes of many Mission Friends. Moody's sermons published in Sweden were distributed in books, newspapers, and colporteur tracts, and they led to the spread of Sweden's "Moody fever" from 1875 through 1880. He preached his last sermon on November 16, 1899, in Kansas City, Missouri. Becoming ill, he returned home by train to Northfield. During the preceding several months, friends had observed that he had gained weight, adding to his already ample frame. Although his illness was never diagnosed, it has been speculated that he suffered from congestive heart failure. He died on December 22, 1899, surrounded by his family. R. A. Torrey, already installed as the leader of Moody's Chicago Bible Institute, succeeded him as its pastor. Religious historian James Findlay says that: Ten years after Moody's death the Chicago Avenue Church was renamed the Moody Church in his honor, and the Chicago Bible Institute was likewise renamed the Moody Bible Institute. During World War II a Liberty ship built in Panama City, Florida, was named in his honor.
https://en.wikipedia.org/wiki?curid=8459
Dubnium Dubnium is a synthetic chemical element with the symbol Db and atomic number 105. Dubnium is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 28 hours. This greatly limits the extent of research on dubnium. Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, resulting in credit for the discovery being officially shared between both teams. The element was formally named "dubnium" in 1997 after the town of Dubna, the site of the JINR. Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this. Solution chemistry experiments have revealed that dubnium often behaves more like niobium than tantalum, breaking periodic trends. Uranium, element 92, is the heaviest element to occur in significant quantities in nature; heavier elements can only be practically produced by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, in 1955.
From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars. The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in April 1968. The scientists bombarded 243Am with a beam of 22Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV ("t"1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively. After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and . They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. To establish that this activity was not from a (22Ne,"x"n) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105. 
In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding 249Cf with 14N, Pb with 15N, and Hg with 15N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 256103, implying that the parent nuclei were of 260105. These results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 260105, leaving only 261105 as a possibly produced isotope. JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105. In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105. JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. 
This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name "nielsbohrium" (Ns) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory. When LBL first announced their synthesis of element 105, they proposed that the new element be named "hahnium" (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy. In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others— to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. The joint neutral group never assembled to assess the claims and the conflict remained unsolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be "unnilpentium", from the Latin roots "un-" and "nil-" and the Greek root "pent-" (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims. In 1981, the Gesellschaft für Schwerionenforschung (GSI; "Society for Heavy Ion Research") in Darmstadt, Hesse, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. 
GSI acknowledged JINR's efforts by suggesting the name "nielsbohrium" for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first. In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Transfermium Working Group (TWG) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams. LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report. In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed "joliotium" (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names "rutherfordium" and "hahnium", originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. 
Thirdly and most importantly, IUPAC rejected the name "seaborgium" for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery. In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. They suggested "seaborgium" for element 106 in exchange for the removal of all the other American proposals, except for the established name "lawrencium" for element 103. The equally entrenched name "nobelium" for element 102 was replaced by "flerovium" after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. The name "flerovium" was later used for element 114. In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named "dubnium" (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name "dubnium" had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names "rutherfordium" and "seaborgium" for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106. Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, 268Db, has a half-life of around a day. 
No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production. The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron–proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.) Only a few atoms of 268Db can be produced in each experiment, and thus the measured lifetimes vary significantly. During three experiments, 23 atoms were created in total, with a resulting half-life of . The second most stable isotope, 270Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei 288Mc and 294Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for 48Ca beams. For its mass, 48Ca has by far the greatest neutron excess of all practically stable nuclei, both in absolute and relative terms, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers. According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum.
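Because only a handful of atoms are ever observed, a quoted half-life is in effect a statistical estimate built from a few individual decay times. As an illustration only (not the published analysis), a minimal sketch of the standard maximum-likelihood estimator for exponential decay, which multiplies the mean observed lifetime by ln 2, applied to the three 270Db lifetimes quoted above:

```python
from math import log

def half_life_from_lifetimes(lifetimes_h):
    """Toy maximum-likelihood estimate for exponential decay:
    the mean observed lifetime estimates the mean life tau,
    and the half-life is T_1/2 = ln(2) * tau."""
    tau = sum(lifetimes_h) / len(lifetimes_h)
    return log(2) * tau

# The three observed 270Db lifetimes mentioned in the text (hours):
print(round(half_life_from_lifetimes([33.4, 1.3, 1.6]), 1))  # prints 8.4
```

With a sample this small the estimate carries a very large statistical uncertainty, which is exactly the point the text makes about lifetimes varying between experiments.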
Several studies have investigated the properties of element 105 and found that they generally agreed with the predictions of the periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. These properties have remained challenging to measure for several reasons: the difficulties of production of superheavy atoms, the low rates of production, which allow only for microscopic scales, the requirement of a radiochemistry laboratory to test the atoms, the short half-lives of those atoms, and the presence of many unwanted activities apart from those of synthesis of superheavy atoms. So far, studies have only been performed on single atoms. A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of the increased electromagnetic attraction between the electrons and the nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV. A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons. Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their total angular momentum quantum number j lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d3/2.)
A singly ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons; 7p energy levels have not been shown to influence dubnium and its properties. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry. Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 29 g/cm3. Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic laws by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine). Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum and the +3 and +4 states will be less stable. 
The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido- complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to do so than tantalum. Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium. The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table. This prompted further exploration of the chemical behavior of complexes of dubnium. 
Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that the formed complex was either or . After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non‐extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation‐exchange columns with α‐hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed. The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of , which was predicted to be less volatile than . 
Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of . Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides. In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer with dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. Mostly containing a +5 species, which was immediately assigned to dubnium, it also had a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium. In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium forms and tantalum forms . 
Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed . From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum.
https://en.wikipedia.org/wiki?curid=8463
Disaccharide A disaccharide (also called a double sugar or biose) is the sugar formed when two monosaccharides (simple sugars) are joined by a glycosidic linkage. Like monosaccharides, disaccharides are soluble in water. Three common examples are sucrose, lactose, and maltose. Disaccharides are one of the four chemical groupings of carbohydrates (monosaccharides, disaccharides, oligosaccharides, and polysaccharides). The most common types of disaccharides—sucrose, lactose, and maltose—have 12 carbon atoms, with the general formula C12H22O11. The differences in these disaccharides are due to atomic arrangements within the molecule. The joining of simple sugars into a double sugar happens by a condensation reaction, which involves the elimination of a water molecule from the functional groups only. Breaking apart a double sugar into its two simple sugars is accomplished by hydrolysis with the help of a type of enzyme called a disaccharidase. As building the larger sugar ejects a water molecule, breaking it down consumes a water molecule. These reactions are vital in metabolism. Each disaccharide is broken down with the help of a corresponding disaccharidase (sucrase, lactase, and maltase). There are two functionally different classes of disaccharides: The formation of a disaccharide molecule from two monosaccharide molecules proceeds by displacing a hydroxyl radical from one molecule and a hydrogen nucleus (a proton) from the other, so that the now vacant bonds on the monosaccharides join the two monomers together. The vacant bonds on the hydroxyl radical and the proton unite in their turn, forming a molecule of water, which then goes free. Because of the removal of the water molecule from the product, the term of convenience for such a process is "dehydration reaction" (also "condensation reaction" or "dehydration synthesis").
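The bookkeeping in the preceding paragraph (two monosaccharides in, one water molecule out) can be checked directly on the molecular formulas. A minimal sketch, whose simple parser handles only formulas of the C6H12O6 kind:

```python
from collections import Counter
import re

def formula_counts(formula):
    """Parse a simple molecular formula such as 'C6H12O6' into element counts."""
    return Counter({el: int(n or 1)
                    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)})

glucose = formula_counts("C6H12O6")

# Condensation: two simple sugars join, ejecting one molecule of water.
disaccharide = glucose + glucose
disaccharide.subtract(formula_counts("H2O"))

assert disaccharide == formula_counts("C12H22O11")  # the general disaccharide formula
```

The same arithmetic covers sucrose and lactose as well, since glucose, fructose, and galactose are all isomers sharing the formula C6H12O6.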
For example, milk sugar (lactose) is a disaccharide made by condensation of one molecule of each of the monosaccharides glucose and galactose, whereas the disaccharide sucrose, found in sugar cane and sugar beet, is a condensation product of glucose and fructose. Maltose, another common disaccharide, is condensed from two glucose molecules. The dehydration reaction that bonds monosaccharides into disaccharides (and also bonds monosaccharides into more complex polysaccharides) forms what are called glycosidic bonds. The glycosidic bond can be formed between any hydroxyl group on the component monosaccharides. So, even if both component sugars are the same (e.g., glucose), different bond combinations (regiochemistry) and stereochemistry ("alpha-" or "beta-") result in disaccharides that are diastereoisomers with different chemical and physical properties. Depending on the monosaccharide constituents, disaccharides are sometimes crystalline, sometimes water-soluble, and sometimes sweet-tasting and sticky-feeling. Digestion involves breakdown to the monosaccharides. Maltose, cellobiose, and chitobiose are hydrolysis products of the polysaccharides starch, cellulose, and chitin, respectively. Less common disaccharides include:
https://en.wikipedia.org/wiki?curid=8464
Dactylic hexameter Dactylic hexameter (also known as "heroic hexameter" and "the meter of epic") is a form of meter or rhythmic scheme in poetry. It is traditionally associated with the quantitative meter of classical epic poetry in both Greek and Latin and was consequently considered to be "the" grand style of Western classical poetry. Some premier examples of its use are Homer's "Iliad" and "Odyssey", Virgil's "Aeneid", and Ovid's "Metamorphoses". Hexameters also form part of elegiac poetry in both languages, the elegiac couplet being a dactylic hexameter line paired with a dactylic pentameter line. A dactylic hexameter has six (in Greek ἕξ, "hex") feet. In strict dactylic hexameter, each foot would be a dactyl (a long and two short syllables), but classical meter allows for the substitution of a spondee (two long syllables) in place of a dactyl in most positions. Specifically, the first four feet can either be dactyls or spondees more or less freely. The fifth foot is usually a dactyl (around 95% of the time in Homer). The sixth foot can be filled by either a trochee (a long then short syllable) or a spondee. Thus the dactylic line most normally is scanned as follows: Hexameters also have a primary caesura—a break between words, sometimes (but not always) coinciding with a break in sense—at one of several normal positions: after the first syllable of the second foot; after the first syllable in the third foot (the "masculine" caesura); after the second syllable in the third foot if the third foot is a dactyl (the "feminine" caesura); after the first syllable of the fourth foot (the hephthemimeral caesura). Hexameters are frequently enjambed—the meaning runs over from one line to the next, without terminal punctuation—which helps to create the long, flowing narrative of epic. The hexameter is generally considered the most grandiose and formal meter.
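The substitution rules above define a small, enumerable space of scansions. A minimal sketch (treating the fifth foot as always dactylic, in line with the roughly 95% figure, and writing long syllables as "–" and short ones as "u"):

```python
from itertools import product

DACTYL, SPONDEE, TROCHEE = "–uu", "––", "–u"

def hexameter_patterns():
    """Enumerate the common scansions of a dactylic hexameter line:
    feet 1-4 are dactyls or spondees, foot 5 is (usually) a dactyl,
    and foot 6 is a trochee or a spondee."""
    for first_four in product((DACTYL, SPONDEE), repeat=4):
        for last in (TROCHEE, SPONDEE):
            yield first_four + (DACTYL, last)

patterns = list(hexameter_patterns())
print(len(patterns))  # 2**4 choices for feet 1-4, times 2 endings: prints 32
```

Admitting the rare spondaic fifth foot that Homer occasionally uses would double the count to 64.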
An English-language example of the dactylic hexameter, in quantitative meter: The preceding line follows the rules of Greek and Latin prosody. Syllables containing long vowels, diphthongs and short vowels followed by two or more consonants count as long; all other syllables count as short. Such values may not correspond with the rhythms of ordinary spoken English. The hexameter was first used by early Greek poets of the oral tradition, and the most complete extant examples of their works are the "Iliad" and the "Odyssey", which influenced the authors of all later classical epics that survive today. Early epic poetry was also accompanied by music, and pitch changes associated with the accented Greek must have highlighted the melody, though the exact mechanism is still a topic of discussion. The Homeric poems arrange words so as to create an interplay between the metrical ictus—the first syllable of each foot—and the natural, spoken accent of words. If the ictus and accent coincide too frequently the hexameter becomes "sing-songy". Thus in general, word breaks occur in the middle of metrical feet, while ictus and accent coincide more often near the end of the line. The first line of Homer’s "Iliad"—"Sing, goddess, the anger of Peleus’ son Achilles"—provides an example: Dividing the line into metrical units: Note how the word endings do not coincide with the end of a metrical foot; for the early part of the line this forces the accent of each word to lie in the middle of a foot, playing against the ictus. This line also includes a masculine caesura after , a break that separates the line into two parts. Homer employs a feminine caesura more commonly than later writers: an example occurs in "Iliad" I.5 "...and every bird; thus the plan of Zeus came to fulfillment": Homer’s hexameters contain a higher proportion of dactyls than later hexameter poetry. They are also characterised by a laxer following of verse principles than later epicists almost invariably adhered to. 
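The quantity rule just stated (syllables containing long vowels or diphthongs, and short vowels followed by two or more consonants, count as long; all others count as short) is mechanical enough to sketch in code. A toy classifier, assuming syllables are already segmented and long vowels are marked with macrons; real scansion would also need rules for elision, mute-plus-liquid clusters, and the like:

```python
DIPHTHONGS = {"ae", "au", "ei", "eu", "oe", "ui"}
LONG_VOWELS = set("āēīōū")

def syllable_is_long(nucleus, following_consonants):
    """Classical quantity rule: a syllable is long if its vowel is long,
    if its nucleus is a diphthong, or if a short vowel is followed by
    two or more consonants; otherwise it is short."""
    if nucleus in LONG_VOWELS or nucleus in DIPHTHONGS:
        return True
    return len(following_consonants) >= 2

# 'a' before the cluster "rm" (as in "arma") is long by position;
# a bare short 'a' is short; the diphthong 'ae' is long by nature.
print(syllable_is_long("a", "rm"), syllable_is_long("a", ""),
      syllable_is_long("ae", ""))  # prints True False True
```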
For example, Homer allows spondaic fifth feet (albeit not often), whereas many later authors never do. Homer also altered the forms of words to allow them to fit the hexameter, typically by using a dialectal form: "ptolis" is an epic form used instead of the Attic "polis" as necessary for the meter. Proper names sometimes take forms to fit the meter, for example "Pouludamas" instead of the metrically unviable "Poludamas". Some lines require a knowledge of the digamma for their scansion, e.g. "Iliad" I.108 "you have not yet spoken a good word nor brought one to pass": Here the word was originally in Ionian; the digamma, later lost, lengthened the last syllable of the preceding and removed the apparent defect in the meter. A digamma also saved the hiatus in the third foot. This example demonstrates the oral tradition of the Homeric epics that flourished before they were written down sometime in the 7th century BC. In spite of the occasional exceptions in early epic, most of the later rules of hexameter composition have their origins in the methods and practices of Homer. The hexameter came into Latin as an adaptation from Greek long after the practice of singing the epics had faded. Consequently, the properties of the meter were learned as specific "rules" rather than as a natural result of musical expression. Also, because the Latin language generally has a higher proportion of long syllables than Greek, it is by nature more spondaic. Thus the Latin hexameter took on characteristics of its own. The earliest example of hexameter in Latin poetry is the "Annales" of Ennius, which established it as the standard for later Latin epics. Later Republican writers, such as Lucretius, Catullus and even Cicero, wrote hexameter compositions, and it was at this time that many of the principles of Latin hexameter were firmly established, and followed by later writers such as Virgil, Ovid, Lucan, and Juvenal.
Virgil's opening line for the "Aeneid" is a classic example: As in Greek, lines were arranged so that the metrically long syllables—those occurring at the beginning of a foot—often avoided the natural stress of a word. In the earlier feet of a line, meter and stress were expected to clash, while in the later feet they were expected to resolve and coincide—an effect that gives each line a natural "dum-ditty-dum-dum" ("shave and a haircut") rhythm to close. Such an arrangement is a balance between an exaggerated emphasis on the metre—which would cause the verse to be sing-songy—and the need to provide some repeated rhythmic guide for skilled recitation. In the following example of Ennius's early Latin hexameter composition, metrical weight (') falls on the first and last syllables of '; the ictus is therefore opposed to the natural stress on the second syllable when the word is pronounced. Similarly, the second syllables of the words ' and ' carry the metrical ictus even though the first is naturally stressed in typical pronunciation. In the closing feet of the line, the natural stresses that fall on the third syllable of ' and the second syllable of ' coincide with the metrical ictus and produce the characteristic "shave and a haircut" ending: Like their Greek predecessors, classical Latin poets avoided a large number of word breaks at the ends of foot divisions except between the fourth and fifth, where it was encouraged. In order to preserve the rhythmic close, Latin poets avoided the placement of a single syllable or four-syllable word at the end of a line. The caesura is also handled far more strictly, with Homer's feminine caesura becoming exceedingly rare, and the second-foot caesura always paired with one in the fourth. One example of the evolution of the Latin verse form can be seen in a comparative analysis of the use of spondees in Ennius' time vs. the Augustan age. 
The repeated use of the heavily spondaic line came to be frowned upon, as well as the use of a high proportion of spondees in both of the first two feet. The following lines of Ennius would not have been felt admissible by later authors since they both contain repeated spondees at the beginning of consecutive lines: However, it is from Virgil that the following famous, heavily spondaic line comes: By the age of Augustus, poets like Virgil closely followed the rules of the meter and approached it in a highly rhetorical way, looking for effects that can be exploited in skilled recitation. For example, the following line from the "Aeneid" (VIII.596) describes the movement of rushing horses and how "a hoof shakes the crumbling field with a galloping sound": This line is made up of five dactyls and a closing spondee, an unusual rhythmic arrangement that imitates the described action. A similar effect is found in VIII.452, where Virgil describes how the blacksmith sons of Vulcan "lift their arms with great strength one to another" in forging Aeneas' shield: The line consists of all spondees except for the usual dactyl in the fifth foot, and is meant to mimic the pounding sound of the work. A third example that mixes the two effects comes from I.42, where Juno pouts that Athena was allowed to use Jove's thunderbolts to destroy Ajax ("she hurled Jove's quick fire from the clouds"): This line is nearly all dactyls except for the spondee at "-lata e". This change in rhythm paired with the harsh elision is intended to emphasize the crash of Athena's thunderbolt. Virgil will occasionally deviate from the strict rules of the meter to produce a special effect. One example from I.105 describing a ship at sea during a storm has Virgil violating metrical standards to place a single-syllable word at the end of the line: The boat "gives its side to the waves; there comes next in a heap a steep mountain of water." 
By placing the monosyllable "mons" at the end of the line, Virgil interrupts the usual "shave and a haircut" pattern to produce a jarring rhythm, an effect that echoes the crash of a large wave against the side of a ship. The Roman poet Horace uses a similar trick to highlight the comedic irony that "Mountains will be in labor, and bring forth a ridiculous mouse" in this famous line from his "Ars Poetica" (line 139): Another amusing example that comments on the importance of these verse rules comes later in the same poem (line 263): This line, which lacks a proper caesura, is translated "Not every critic sees an inharmonious verse." The verse innovations of the Augustan writers were carefully imitated by their successors in the Silver Age of Latin literature. The verse form itself then was little changed, as the quality of a poet's hexameter was judged against the standard set by Virgil and the other Augustan poets, a respect for literary precedent encompassed by the Latin word '. Deviations were generally regarded as idiosyncrasies or hallmarks of personal style, and were not imitated by later poets. Juvenal, for example, was fond of occasionally creating verses that placed a sense break between the fourth and fifth foot (instead of in the usual caesura positions), but this technique—known as the bucolic diaeresis—did not catch on with other poets. In the late empire, writers experimented again by adding unusual restrictions to the standard hexameter. The rhopalic verse of Ausonius is a good example; besides following the standard hexameter pattern, each word in the line is one syllable longer than the previous, e.g.: Also notable is the tendency among late grammarians to thoroughly dissect the hexameters of Virgil and earlier poets. A treatise on poetry by Diomedes Grammaticus is a good example, as this work (among other things) categorizes dactylic hexameter verses in ways that were later interpreted under the golden line rubric. 
Independently, these two trends show the form becoming highly artificial—more like a puzzle to solve than a medium for personal poetic expression. By the Middle Ages, some writers adopted more relaxed versions of the meter. Bernard of Cluny, for example, employs it in his "De Contemptu Mundi", but ignores classical conventions in favor of accentual effects and predictable rhyme both within and between verses, e.g.: Not all medieval writers are so at odds with the Virgilian standard, and with the rediscovery of classical literature, later Medieval and Renaissance writers are far more orthodox, but by then the form had become an academic exercise. Petrarch, for example, devoted much time to his "Africa", a dactylic hexameter epic on Scipio Africanus, but this work was unappreciated in his time and remains little read today. In contrast, Dante decided to write his epic, the "Divine Comedy" in Italian—a choice that defied the traditional epic choice of Latin dactylic hexameters—and produced a masterpiece beloved both then and now. With the New Latin period, the language itself came to be regarded as a medium only for "serious" and learned expression, a view that left little room for Latin poetry. The emergence of Recent Latin in the 20th century restored classical orthodoxy among Latinists and sparked a general (if still academic) interest in the beauty of Latin poetry. Today, the modern Latin poets who use the dactylic hexameter are generally as faithful to Virgil as Rome's Silver Age poets. Many poets have attempted to write dactylic hexameters in English, though few works composed in the meter have stood the test of time. Most such works are accentual rather than quantitative. Perhaps the most famous is Longfellow's "Evangeline", whose first line is as follows: Poets who have written quantitative hexameters in English include Robert Bridges. Dactylic hexameter has proved more successful in German than in most modern languages. 
Friedrich Gottlieb Klopstock's epic "Der Messias" popularized accentual dactylic hexameter in German. Subsequent German poets to employ the form include Goethe (notably in his "Reineke Fuchs") and Schiller. "The Seasons" ("Metai") by Kristijonas Donelaitis is a famous Lithuanian poem in quantitative dactylic hexameters. Because of the nature of Lithuanian, more than half of the lines of the poem are entirely spondaic save for the mandatory dactyl in the fifth foot. Jean-Antoine de Baïf elaborated a system which used original graphemes for regulating French versification by quantity on the Greco-Roman model, a system which came to be known as "vers mesurés", or "vers mesurés à l'antique", which the French language of the Renaissance permitted. In works like his "Étrénes de poézie Franzoęze an vęrs mezurés" (1574) or "Chansonnettes" he used dactylic hexameter, Sapphic stanzas, etc., in quantitative meters.
https://en.wikipedia.org/wiki?curid=8465
Dorado Dorado () is a constellation in the southern sky. It was named in the late 16th century and is now one of the 88 modern constellations. Its name refers to the dolphinfish ("Coryphaena hippurus"), which is known as "dorado" in Portuguese, although it has also been depicted as a swordfish. Dorado contains most of the Large Magellanic Cloud, the remainder being in the constellation Mensa. The South Ecliptic pole also lies within this constellation. Even though the name Dorado is not Latin but Portuguese, astronomers give it the Latin genitive form "Doradus" when naming its stars; it is treated (like the adjacent asterism Argo Navis) as a feminine proper name of Greek origin ending in -ō (like "Io" or "Callisto" or "Argo"), which have a genitive ending "-ūs". Dorado was one of twelve constellations named by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It appeared: Dorado has been represented historically as a dolphinfish and a swordfish; the latter depiction is inaccurate. It has also been represented as a goldfish. The constellation was also known in the 17th and 18th century as Xiphias. The name "Dorado" ultimately became dominant and was adopted by the IAU. Alpha Doradus is a blue-white star of magnitude 3.3, 176 light-years from Earth. It is the brightest star in Dorado. Beta Doradus is a notably bright Cepheid variable star. It is a yellow-tinged supergiant star that has a minimum magnitude of 4.1 and a maximum magnitude of 3.5. One thousand and forty light-years from Earth, Beta Doradus has a period of 9 days and 20 hours. R Doradus is one of the many variable stars in Dorado. S Dor, a magnitude 9.721 hypergiant in the Large Magellanic Cloud, is the prototype of S Doradus variable stars. The variable star R Doradus, at magnitude 5.73, has the largest known apparent size of any star other than the Sun. Gamma Doradus is the prototype of the Gamma Doradus variable stars. 
Supernova 1987A was the closest supernova to occur since the invention of the telescope. SNR 0509-67.5 is the remnant of an unusually energetic Type Ia supernova from about 400 years ago. HE 0437-5439 is a hypervelocity star escaping from the Milky Way/Magellanic Cloud system. Dorado is also the location of the South Ecliptic pole, which lies near the fish's head. The pole was called "Polus Doradinalis" by Willem Janszoon Blaeu. Because Dorado contains part of the Large Magellanic Cloud, it is rich in deep sky objects. The Large Magellanic Cloud, 25,000 light-years in diameter, is a satellite galaxy of the Milky Way Galaxy, located at a distance of 179,000 light-years. It has been deformed by its gravitational interactions with the larger Milky Way. In 1987, it became host to SN 1987A, the first supernova of 1987 and the closest since 1604. The galaxy contains over 10,000 million stars. All coordinates given are for Epoch J2000.0. In Chinese astronomy, the stars of Dorado are in two of Xu Guangqi's Southern Asterisms (近南極星區, "Jìnnánjíxīngōu"): the White Patches Attached (夾白, "Jiābái") and the Goldfish (金魚, "Jīnyú").
https://en.wikipedia.org/wiki?curid=8466
Draco (lawgiver) Draco (; , "Drakōn"; fl. c. 7th century BC), also called Drako or Drakon, was the first recorded legislator of Athens in Ancient Greece. He replaced the prevailing system of oral law and blood feud by a written code to be enforced only by a court of law. Draco was the first democratic legislator; the Athenian citizens had asked him to serve as lawgiver for the city-state, though they did not anticipate that the laws he established would be characterized by their harshness. Since the 19th century, the adjective "draconian" (Greek: "δρακόντειος" "drakónteios") refers to similarly unforgiving rules or laws, in Greek, English and other European languages. During the 39th Olympiad, in 622 or 621 BC, Draco established the legal code with which he is identified. Little is known about his life. He may have belonged to the Greek nobility of Attica, with which the 10th-century Suda text records him as contemporaneous, prior to the period of the Seven Sages of Greece. It also relates a folkloric story of his death in the Aeginetan theatre. In a traditional ancient Greek show of approval, his supporters "threw so many hats and shirts and cloaks on his head that he suffocated, and was buried in that same theatre". The truth about his death is still unclear, but it is known that Draco was driven out of Athens by the Athenians to the neighbouring island of Aegina, where he spent the remainder of his life. The laws ( - "thesmoi") that he laid down were the first written constitution of Athens. So that no one would be unaware of them, they were posted on wooden tablets ( - "axones"), where they were preserved for almost two centuries on steles in the shape of three-sided pyramids ( - "kyrbeis"). The tablets were called "axones", perhaps because they could be pivoted along the pyramid's axis to read any side. The constitution featured several major innovations: The laws were particularly harsh. 
For example, any debtor whose status was lower than that of his creditor was forced into slavery. The punishment was more lenient for those owing a debt to a member of a lower class. The death penalty was the punishment for even minor offences, such as stealing a cabbage. Concerning the liberal use of the death penalty in the Draconic code, Plutarch states: "It is said that Drakon himself, when asked why he had fixed the punishment of death for most offences, answered that he considered these lesser crimes to deserve it, and he had no greater punishment for more important ones". All his laws were repealed by Solon in the early 6th century BC, with the exception of the homicide law. After much debate, the Athenians decided to revise the laws, including the homicide law, in 409 BC. The homicide law is a highly fragmented inscription but states that it is up to the victim's relatives to prosecute a killer. According to the preserved part of the inscription, unintentional homicides received a sentence of exile. It is not clear whether Draco's law specified the punishment for intentional homicide. In 409 BC, intentional homicide was punished by death, but Draco's law begins, 'καὶ ἐὰμ μὲ ‘κ [π]ρονοί[α]ς [κ]τ[ένει τίς τινα, φεύγ]ε[ν]', which is ambiguous and difficult to translate. One possible translation offers, "Even if a man not intentionally kills another, he is exiled". Draco introduced the lot-chosen Council of Four Hundred, distinct from the Areopagus, which evolved in later constitutions to play a large role in Athenian democracy. Aristotle notes that Draco, while having the laws written, merely legislated for an existing unwritten Athenian constitution such as setting exact qualifications for eligibility for office. Draco extended the franchise to all free men who could furnish themselves with a set of military equipment. 
They elected the Council of Four Hundred from among their number; nine archons and the treasurers were drawn from persons possessing an unencumbered property of not less than ten "minas", the generals ("strategoi") and commanders of cavalry ("hipparchoi") from those who could show an unencumbered property of not less than a hundred "minas" and had children born in lawful wedlock over ten years of age. Thus, in the event of their death, their estate could pass to a competent heir. These officers were required to hold to account the "prytanes" (councillors), "strategoi" (generals) and "hipparchoi" (cavalry officers) of the preceding year until their accounts had been audited. "The Council of Areopagus was guardian of the laws, and kept watch over the magistrates to see that they executed their offices in accordance with the laws. Any person who felt himself wronged might lay an information before the Council of Areopagus, on declaring what law was broken by the wrong done to him. But, as has been said before, loans were secured upon the persons of the debtors, and the land was in the hands of a few."
https://en.wikipedia.org/wiki?curid=8467
Determinant In linear algebra, the determinant is a scalar value that can be computed from the elements of a square matrix and encodes certain properties of the linear transformation described by the matrix. The determinant of a matrix "A" is denoted det("A"), det "A", or |"A"|. Geometrically, it can be viewed as the volume scaling factor of the linear transformation described by the matrix. This is also the signed volume of the "n"-dimensional parallelepiped spanned by the column or row vectors of the matrix. The determinant is positive or negative according to whether the linear mapping preserves or reverses the orientation of "n"-space. In the case of a 2 × 2 matrix with entries "a", "b", "c", "d" (read row by row), the determinant may be defined as "ad" − "bc". Similarly, for a 3 × 3 matrix "A" with entries "a" through "i" (read row by row), its determinant is "a"("ei" − "fh") − "b"("di" − "fg") + "c"("dh" − "eg"). Each determinant of a 2 × 2 matrix in this equation is called a minor of the matrix "A". This procedure can be extended to give a recursive definition for the determinant of an "n" × "n" matrix, the "minor expansion formula". Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and the determinant can be used to solve those equations, although other methods of solution are much more computationally efficient. In linear algebra, a matrix (with entries in a field) is singular (not invertible) if and only if its determinant is zero. This leads to the use of determinants in defining the characteristic polynomial of a matrix, whose roots are the eigenvalues. In analytic geometry, determinants express the signed "n"-dimensional volumes of "n"-dimensional parallelepipeds. This leads to the use of determinants in calculus, the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants appear frequently in algebraic identities such as the Vandermonde identity. Determinants possess many algebraic properties. 
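The recursive minor expansion can be stated as a short program. The following is a minimal sketch (not part of the original article; the function name and example matrices are illustrative) of cofactor expansion along the top row:

```python
# Sketch of the "minor expansion formula": expand along the top row,
# alternately adding and subtracting element-times-minor products.
def det(m):
    """Determinant of a square matrix given as a list of rows."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # The minor: the matrix with row 0 and column j removed.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2, i.e. 1*4 - 2*3
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24, the diagonal product
```

Note that the recursion performs on the order of "n"! work, which is why, as the article observes, other solution methods are much more efficient for large matrices.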
One of them is multiplicativity, namely that the determinant of a product of matrices is equal to the product of determinants. Special types of matrices have special determinants; for example, the determinant of an orthogonal matrix is always plus or minus one, and the determinant of a complex Hermitian matrix is always real. If an "n" × "n" real matrix "A" is written in terms of its column vectors formula_3 then This means that formula_5 maps the unit "n"-cube to the "n"-dimensional parallelotope defined by the vectors formula_6 the region formula_7 The determinant gives the signed "n"-dimensional volume of this parallelotope, formula_8 and hence describes more generally the "n"-dimensional volume scaling factor of the linear transformation produced by "A". (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully "n"-dimensional, which indicates that the dimension of the image of "A" is less than "n". This means that "A" produces a linear transformation which is neither onto nor one-to-one, and so is not invertible. There are various equivalent ways to define the determinant of a square matrix "A", i.e. one with the same number of rows and columns. Perhaps the simplest way to express the determinant is by considering the elements in the top row and the respective minors; starting at the left, multiply the element by the minor, then subtract the product of the next element and its minor, and alternate adding and subtracting such products until all elements in the top row have been exhausted. For example, here is the result for a 4 × 4 matrix: Another way to define the determinant is expressed in terms of the columns of the matrix. 
If we write an "n" × "n" matrix "A" in terms of its column vectors where the formula_11 are vectors of size "n", then the determinant of "A" is defined so that where "b" and "c" are scalars, "v" is any vector of size "n" and "I" is the identity matrix of size "n". These equations say that the determinant is a linear function of each column, that interchanging adjacent columns reverses the sign of the determinant, and that the determinant of the identity matrix is 1. These properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These suffice to uniquely calculate the determinant of any square matrix. Provided the underlying scalars form a field (more generally, a commutative ring with unity), the definition below shows that such a function exists, and it can be shown to be unique. Equivalently, the determinant can be expressed as a sum of products of entries of the matrix where each product has "n" terms and the coefficient of each product is −1 or 1 or 0 according to a given rule: it is a polynomial expression of the matrix entries. This expression grows rapidly with the size of the matrix (an "n" × "n" matrix contributes "n"! terms), so it will first be given explicitly for the case of 2 × 2 matrices and 3 × 3 matrices, followed by the rule for arbitrary size matrices, which subsumes these two cases. Assume "A" is a square matrix with "n" rows and "n" columns, so that it can be written as The entries can be numbers or expressions (as happens when the determinant is used to define a characteristic polynomial); the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. 
The determinant of "A" is denoted by det("A"), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets: The Leibniz formula for the determinant of a 2 × 2 matrix with rows ("a", "b") and ("c", "d") is "ad" − "bc". If the matrix entries are real numbers, the matrix can be used to represent two linear maps: one that maps the standard basis vectors to the rows of the matrix, and one that maps them to the columns of the matrix. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), ("a", "b"), ("a" + "c", "b" + "d"), and ("c", "d"), as shown in the accompanying diagram. The absolute value of "ad" − "bc" is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by the mapping. (The parallelogram formed by the columns of the matrix is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.) The absolute value of the determinant together with the sign becomes the "oriented area" of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix). To show that "ad" − "bc" is the signed area, one may consider a matrix containing two vectors "u" = ("a", "b") and "v" = ("c", "d") representing the parallelogram's sides. The signed area can be expressed as |"u"| |"v"| sin "θ" for the angle "θ" between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. 
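The signed-area interpretation can be checked numerically. A minimal sketch, with assumed example vectors (not taken from the article):

```python
# Sketch: the 2 x 2 determinant as the signed area of the parallelogram
# spanned by two row vectors u and v.
def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

u, v = (3, 0), (1, 2)
print(det2(u, v))  # 6: the turn from u to v is counterclockwise, area positive
print(det2(v, u))  # -6: listing the vectors in the other order flips the sign
```

Swapping the two rows reverses the orientation and hence the sign, exactly as the "oriented area" discussion above describes.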
so that which can be determined by the pattern of the scalar product to be equal to Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by "A". When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving. The object known as the "bivector" is related to these ideas. In 2D, it can be interpreted as an "oriented plane segment" formed by imagining two vectors each with origin and coordinates and The bivector magnitude (denoted by is the "signed area", which is also the determinant The Laplace formula for the determinant of a 3 × 3 matrix, when expanded out, gives the Leibniz formula. The Leibniz formula for the determinant of a 3 × 3 matrix: The rule of Sarrus is a mnemonic for the 3 × 3 matrix determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration: formula_19 formula_20 This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions. The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula. The Leibniz formula for the determinant of an "n" × "n" matrix "A" is Here the sum is computed over all permutations "σ" of the set {1, 2, ..., "n"}. A permutation is a function that reorders this set of integers. The value in the "i"th position after the reordering "σ" is denoted by "σ""i". For example, for "n" = 3, the original sequence 1, 2, 3 might be reordered to "σ" = [2, 3, 1], with "σ"1 = 2, "σ"2 = 3, and "σ"3 = 1. The set of all such permutations (also known as the symmetric group on "n" elements) is denoted by S"n". 
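The Leibniz sum over all "n"! permutations can be written out directly. A minimal sketch (function name and example matrix are assumptions, not from the article), using an inversion count to compute each permutation's signature:

```python
from itertools import permutations
from math import prod

def det_leibniz(m):
    """Leibniz formula: sum over all n! permutations of signed products."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        # Signature via inversion count: even number of inversions -> +1,
        # odd -> -1 (equivalent to counting pairwise interchanges).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if p[i] > p[j])
        sign = -1 if inversions % 2 else 1
        total += sign * prod(m[i][p[i]] for i in range(n))
    return total

# For a 3 x 3 matrix this reproduces the rule of Sarrus:
# aei + bfg + cdh - ceg - bdi - afh.
a = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(det_leibniz(a))  # 8
```

Only 3! = 6 of the products are nonzero-signed here, which is exactly the six diagonal products the Sarrus mnemonic enumerates.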
For each permutation "σ", sgn("σ") denotes the signature of "σ", a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges. In any of the formula_22 summands, the term is notation for the product of the entries at positions ("i", "σ""i"), where "i" ranges from 1 to "n": For example, the determinant of a 3 × 3 matrix "A" ("n" = 3) is It is sometimes useful to extend the Leibniz formula to a summation in which not only permutations, but all sequences of "n" indices in the range 1 to "n" occur, ensuring that the contribution of a sequence will be zero unless it denotes a permutation. Thus the totally antisymmetric Levi-Civita symbol formula_26 extends the signature of a permutation, by setting formula_27 for any permutation "σ" of "n", and formula_28 when no permutation "σ" exists such that formula_29 for formula_30 (or equivalently, whenever some pair of indices are equal). The determinant for an "n" × "n" matrix can then be expressed using an "n"-fold summation as or using two epsilon symbols as where now each "ir" and each "jr" should be summed over 1, ..., "n". However, through the use of tensor notation and the suppression of the summation symbol (Einstein's summation convention) we can obtain a much more compact expression of the determinant of the second order system of formula_33 dimensions, formula_34; where formula_36 and formula_37 represent 'e-systems' that take on the values 0, +1 and −1 given the number of permutations of formula_38 and formula_39. More specifically, formula_37 is equal to 0 when there is a repeated index in formula_38; +1 when an even number of permutations of formula_38 is present; −1 when an odd number of permutations of formula_38 is present. The number of indices present in the e-systems is equal to formula_44 and thus can be generalized in this manner. The determinant has many properties. 
Some basic properties of determinants are This can be deduced from some of the properties below, but it follows most easily directly from the Leibniz formula (or from the Laplace expansion), in which the identity permutation is the only one that gives a non-zero contribution. A number of additional properties relate to the effects on the determinant of changing particular rows or columns: Properties 1, 8 and 10 — which all follow from the Leibniz formula — completely characterize the determinant; in other words the determinant is the unique function from matrices to scalars that is "n"-linear alternating in the columns, and takes the value 1 for the identity matrix (this characterization holds even if scalars are taken in any given commutative ring). To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by property 9) or else ±1 (by properties 1 and 12 below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear. For matrices over non-commutative rings, properties 8 and 9 are incompatible for , so there is no good definition of the determinant in this setting. Property 2 above implies that properties for columns have their counterparts in terms of rows: Property 5 says that the determinant on matrices is homogeneous of degree "n". These properties can be used to facilitate the computation of determinants by simplifying the matrix to the point where the determinant can be determined immediately. 
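The characterizing properties ("n"-linearity in the columns, the alternating sign, and the value 1 at the identity) can be illustrated on 2 × 2 matrices. A sketch with assumed example values:

```python
# Sketch checking three characterizing properties of the determinant
# on 2 x 2 matrices: linearity in each column, alternation, det(I) = 1.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a = [[1, 2], [3, 4]]
# Scaling one column scales the determinant (linearity in that column).
scaled = [[5 * a[0][0], a[0][1]], [5 * a[1][0], a[1][1]]]
print(det2(scaled) == 5 * det2(a))  # True
# Interchanging the two columns reverses the sign (alternating).
swapped = [[a[0][1], a[0][0]], [a[1][1], a[1][0]]]
print(det2(swapped) == -det2(a))    # True
# The identity matrix has determinant 1.
print(det2([[1, 0], [0, 1]]))       # 1
```

As the text notes, these three properties already pin the determinant down uniquely, so any other formula for it must agree with checks like these.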
Specifically, for matrices with coefficients in a field, properties 13 and 14 can be used to transform any matrix into a triangular matrix, whose determinant is given by property 7; this is essentially the method of Gaussian elimination. For example, the determinant of can be computed using the following matrices: Here, "B" is obtained from "A" by adding −1/2 × the first row to the second, so that . "C" is obtained from "B" by adding the first to the third row, so that . Finally, "D" is obtained from "C" by exchanging the second and third row, so that . The determinant of the (upper) triangular matrix "D" is the product of its entries on the main diagonal: . Therefore, . The following identity holds for a Schur complement of a square matrix: The Schur complement arises as the result of performing a block Gaussian elimination by multiplying the matrix "M" from the right with a "block lower triangular" matrix Here "I""p" denotes the "p"×"p" identity matrix. After multiplication with the matrix "L", the Schur complement appears in the upper "p"×"p" block. The product matrix is That is, we have effected a Gaussian decomposition The first and last matrices on the RHS have determinant unity, so we have This is Schur's determinant identity. The determinant of a matrix product of square matrices equals the product of their determinants: Thus the determinant is a "multiplicative map". This property is a consequence of the characterization given above of the determinant as the unique "n"-linear alternating function of the columns with value 1 on the identity matrix, since the function that maps can easily be seen to be "n"-linear and alternating in the columns of "M", and takes the value det("A") at the identity. The formula can be generalized to (square) products of rectangular matrices, giving the Cauchy–Binet formula, which also provides an independent proof of the multiplicative property. 
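The elimination procedure described above can be sketched as a program: reduce the matrix to upper triangular form, flip the sign once per row exchange, and multiply the diagonal. The pivot-selection helper and the example matrices below are illustrative assumptions, not taken from the article:

```python
# Sketch: determinant by Gaussian elimination. Row additions leave the
# determinant unchanged; each row swap flips its sign; the determinant
# of the resulting upper triangular matrix is the diagonal product.
def det_gauss(m):
    a = [list(row) for row in m]    # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0                # a fully zero column: determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign            # exchanging two rows negates det
        # Clear the entries below the pivot; det is unchanged.
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    d = sign
    for i in range(n):
        d *= a[i][i]
    return d

print(det_gauss([[0, 2], [3, 4]]))  # -6.0 (one swap, diagonal product 6)
```

Unlike the factorial-cost expansion formulas, this runs in roughly "n"³ operations, which is why elimination is the practical method.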
The determinant det("A") of a matrix "A" is non-zero if and only if "A" is invertible or, yet another equivalent statement, if its rank equals the size of the matrix. If so, the determinant of the inverse matrix is given by In particular, products and inverses of matrices with determinant one still have this property. Thus, the set of such matrices (of fixed size "n") form a group known as the special linear group. More generally, the word "special" indicates the subgroup of another matrix group of matrices of determinant one. Examples include the special orthogonal group (which if "n" is 2 or 3 consists of all rotation matrices), and the special unitary group. Laplace's formula expresses the determinant of a matrix in terms of its minors. The minor "M""i","j" is defined to be the determinant of the -matrix that results from "A" by removing the "i"th row and the "j"th column. The expression is known as a cofactor. The determinant of "A" is given by Calculating det("A") by means of this formula is referred to as expanding the determinant along a row, the "i"th row using the first form with fixed "i", or expanding along a column, using the second form with fixed "j". For example, the Laplace expansion of the matrix along the second column ( and the sum runs over "i") is given by, However, Laplace expansion is efficient for small matrices only. The adjugate matrix adj("A") is the transpose of the matrix consisting of the cofactors, i.e., In terms of the adjugate matrix, Laplace's expansion can be written as Sylvester's determinant theorem states that for "A", an matrix, and "B", an matrix (so that "A" and "B" have dimensions allowing them to be multiplied in either order forming a square matrix): where "I""m" and "I""n" are the and identity matrices, respectively. From this general result several consequences follow. Let be an arbitrary matrix of complex numbers with eigenvalues formula_98. 
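For the 2 × 2 case, the relation between a matrix, its adjugate, and its determinant can be verified directly. A sketch with an assumed example matrix:

```python
# Sketch: for a 2 x 2 matrix, the adjugate is the transpose of the
# cofactor matrix, and A * adj(A) = det(A) * I, which is Laplace's
# expansion written in matrix form.
def adj2(m):
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 2], [3, 4]]
print(matmul2(a, adj2(a)))  # [[-2, 0], [0, -2]], i.e. det(A) * I with det(A) = -2
```

When det("A") is nonzero, dividing the adjugate by it gives the inverse, which is one standard use of this identity.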
(Here it is understood that an eigenvalue with algebraic multiplicity "μ" occurs "μ" times in this list.) Then the determinant of "A" is the product of all eigenvalues, The product of all non-zero eigenvalues is referred to as pseudo-determinant. Conversely, determinants can be used to find the eigenvalues of the matrix "A": they are the solutions of the characteristic equation where "I" is the identity matrix of the same dimension as "A" and "λ" is a (scalar) number which solves the equation (there are no more than "n" solutions, where "n" is the dimension of "A"). A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the leading principal submatrices "A""k" being positive, for all "k" between 1 and "n". The trace tr("A") is by definition the sum of the diagonal entries of "A" and also equals the sum of the eigenvalues. Thus, for complex matrices "A", det(exp("A")) = exp(tr("A")), or, for real matrices "A", tr("A") = log(det(exp("A"))). Here exp("A") denotes the matrix exponential of "A", because every eigenvalue "λ" of "A" corresponds to the eigenvalue exp("λ") of exp("A"). In particular, given any logarithm of "A", that is, any matrix "L" satisfying exp("L") = "A", the determinant of "A" is given by det("A") = exp(tr("L")). For example, for "n" = 2, 3, and 4, respectively, cf. Cayley–Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic "n", the determinant is the signed constant term of the characteristic polynomial, determined recursively from In the general case, this may also be obtained from where the sum is taken over the set of all integers "kl" ≥ 0 satisfying the equation The formula can be expressed in terms of the complete exponential Bell polynomial of "n" arguments "s""l" = −("l" – 1)! tr("A""l") as This formula can also be used to find the determinant of a matrix with multidimensional indices.
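The relations between determinant, trace, and eigenvalues can be checked numerically in the 2×2 case, where the characteristic equation "λ"2 − tr("A")"λ" + det("A") = 0 is solved by the quadratic formula. A minimal Python sketch (illustrative; it assumes real eigenvalues, as for a symmetric matrix):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of a 2x2 real matrix [[a, b], [c, d]] from its
    characteristic equation lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

l1, l2 = eig2(2.0, 1.0, 1.0, 2.0)   # symmetric matrix [[2, 1], [1, 2]]
assert abs(l1 * l2 - 3.0) < 1e-12   # product of eigenvalues = det = 3
assert abs(l1 + l2 - 4.0) < 1e-12   # sum of eigenvalues = trace = 4
```

Here the product of the eigenvalues reproduces the determinant and their sum reproduces the trace, as stated above.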
The product and trace of such matrices are defined in a natural way as An important arbitrary dimension identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of "A" is less than 1 in absolute value, where "I" is the identity matrix. More generally, if det("I" + "sA") is expanded as a formal power series in "s" then all coefficients of "s""m" for "m" > "n" are zero and the remaining polynomial is det("I" + "sA"). For a positive definite matrix "A", the trace operator gives the following tight lower and upper bounds on the log determinant with equality if and only if "A" = "I". This relationship can be derived via the formula for the KL-divergence between two multivariate normal distributions. Also, These inequalities can be proved by bringing the matrix "A" to the diagonal form. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square. For a matrix equation "Ax" = "b", the solution is given by Cramer's rule: where "A""i" is the matrix formed by replacing the "i"th column of "A" by the column vector "b". This follows immediately by column expansion of the determinant, i.e. where the vectors "a"1, ..., "a""n" are the columns of "A". The rule is also implied by the identity It has recently been shown that Cramer's rule can be implemented in O("n"3) time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition. Suppose "A", "B", "C", and "D" are matrices of dimension "p"×"p", "p"×"q", "q"×"p", and "q"×"q", respectively. Then This can be seen from the Leibniz formula for determinants, or from a decomposition like (for the former case) When "A" is invertible, one has as can be seen by employing the decomposition When "D" is invertible, a similar identity with det("D") factored out can be derived analogously, that is, When the blocks are square matrices of the same order further formulas hold.
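Cramer's rule, as stated above, replaces the "i"th column of "A" by "b" and divides determinants. A small Python sketch (illustrative only, using a naive cofactor determinant, so suitable for small systems):

```python
from fractions import Fraction

def det(m):
    # cofactor expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A), where A_i
    is A with its i-th column replaced by b."""
    d = det(A)
    if d == 0:
        raise ValueError("singular matrix: no unique solution")
    n = len(A)
    return [Fraction(det([row[:i] + [b[k]] + row[i+1:]
                          for k, row in enumerate(A)]), d)
            for i in range(n)]
```

For example, `cramer([[2, 1], [1, 3]], [3, 5])` returns the exact solution (4/5, 7/5) of the system 2"x"1 + "x"2 = 3, "x"1 + 3"x"2 = 5.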
For example, if "C" and "D" commute (i.e., "CD" = "DC"), then the following formula comparable to the determinant of a 2×2 matrix holds: Generally, if all pairs of matrices of the block matrix commute, then the determinant of the block matrix is equal to the determinant of the matrix obtained by computing the determinant of the block matrix considering its entries as the entries of a "p"×"p" matrix. As the previous formula shows, for "p" = 2, this criterion is sufficient, but not necessary. When "A" = "D" and "B" = "C", the blocks are square matrices of the same order and the following formula holds (even if "A" and "B" do not commute) When "D" is a 1×1 matrix, "B" is a column vector, and "C" is a row vector then Let "λ" be a complex scalar. If a block matrix is square, its characteristic polynomial can be factored with It can be seen, e.g. using the Leibniz formula, that the determinant of real (or analogously for complex) square matrices is a polynomial function from R"n"×"n" to R, and so it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula: where adj("A") denotes the adjugate of "A". In particular, if "A" is invertible, we have Expressed in terms of the entries of "A", these are Yet another equivalent formulation is using big O notation. The special case where "A" = "I", the identity matrix, yields This identity is used in describing the tangent space of certain matrix Lie groups. If the matrix "A" is written as "A" = ["a" "b" "c"] where "a", "b", "c" are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two: The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices "A" and "B" are similar, if there exists an invertible matrix "X" such that "B" = "X"−1"AX". Indeed, repeatedly applying the above identities yields The determinant is therefore also called a similarity invariant.
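The identity det[["A", "B"], ["C", "D"]] = det("AD" − "BC") for commuting "C" and "D" can be spot-checked numerically. The following Python sketch is an illustration (it uses diagonal blocks so that "C" and "D" commute); it assembles the 4×4 block matrix and compares both sides using a brute-force Leibniz determinant:

```python
from itertools import permutations

def det(m):
    # Leibniz formula: sum over permutations of signed products
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign          # count inversions for the sign
        prod = 1
        for i, j in enumerate(perm):
            prod *= m[i][j]
        total += sign * prod
    return total

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A, B = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
C, D = [[2, 0], [0, 3]], [[5, 0], [0, 7]]      # diagonal blocks commute
M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]  # [[A, B], [C, D]]
AD_BC = [[a - b for a, b in zip(r1, r2)]
         for r1, r2 in zip(matmul(A, D), matmul(B, C))]
assert det(M) == det(AD_BC)        # det[[A, B], [C, D]] = det(AD - BC)
```

Both sides evaluate to −3 for these particular blocks.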
The determinant of a linear transformation "T": "V" → "V" for some finite-dimensional vector space "V" is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in "V". By the similarity invariance, this determinant is independent of the choice of the basis for "V" and therefore only depends on the endomorphism "T". The determinant of a linear transformation of an "n"-dimensional vector space "V" can be formulated in a coordinate-free manner by considering the "n"th exterior power Λ"n""V" of "V". "T" induces a linear map As Λ"n""V" is one-dimensional, the map Λ"n""T" is given by multiplying with some scalar. This scalar coincides with the determinant of "T", that is to say This definition agrees with the more concrete coordinate-dependent definition. In particular, for a square "n"×"n" matrix "A" whose columns are "a"1, ..., "a""n", its determinant satisfies "a"1 ∧ ... ∧ "a""n" = det("A") "e"1 ∧ ... ∧ "e""n", where "e"1, ..., "e""n" is the standard basis of R"n". This follows from the characterization of the determinant given above. For example, switching two columns changes the sign of the determinant; likewise, permuting the vectors in the exterior product also changes its sign. For this reason, the highest non-zero exterior power Λ"n"("V") is sometimes also called the determinant of "V" and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms Λ"k""V" with "k" < "n". The determinant can also be characterized as the unique function from the set of all matrices with entries in a field "K" to that field satisfying the following three properties: first, "D" is an "n"-linear function: considering all but one column of "A" fixed, the determinant is linear in the remaining column, that is for any column vectors "v"1, ..., "v""n", and "w" and any scalars (elements of "K") "a" and "b".
Second, "D" is an alternating function: for any matrix "A" with two identical columns, "D"("A") = 0. Finally, "D"("I""n") = 1, where "I""n" is the identity matrix. This fact also implies that every other "n"-linear alternating function "F" satisfies "F"("M") = "F"("I") det("M"). This definition can also be extended where "K" is a commutative ring "R", in which case a matrix is invertible if and only if its determinant is an invertible element in "R". For example, a matrix "A" with entries in Z, the integers, is invertible (in the sense that there exists an inverse matrix with integer entries) if the determinant is +1 or −1. Such a matrix is called unimodular. The determinant defines a mapping between the group of invertible matrices with entries in "R" and the multiplicative group of units in "R". Since it respects the multiplication in both groups, this map is a group homomorphism. Secondly, given a ring homomorphism "f": "R" → "S", there is a map GL"n"("R") → GL"n"("S") given by replacing all entries in "R" by their images under "f". The determinant respects these maps, i.e., given a matrix "A" with entries in "R", the identity "f"(det("A")) = det("f"("A")) holds. In other words, the following diagram commutes: For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo "m" of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo "m" (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors GL"n" and (⋅)× (see also Natural transformation#Determinant). Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group, For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly.
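The compatibility of the determinant with ring homomorphisms can be spot-checked for reduction modulo "m". The Python sketch below is an illustration with an arbitrary integer matrix; it also verifies a unimodular example:

```python
from itertools import permutations

def det(m):
    # Leibniz formula (adequate for small integer matrices)
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
m = 7
A_mod = [[x % m for x in row] for row in A]
# reducing then taking the determinant agrees with taking it then reducing
assert det(A_mod) % m == det(A) % m

# a unimodular integer matrix (det = +1 or -1) has an integer inverse
U = [[2, 1], [1, 1]]
assert det(U) == 1
```

This is exactly the commuting diagram specialised to the reduction map Z → Z/"m"Z.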
For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators. The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula Another infinite-dimensional notion of determinant is the functional determinant. For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede−Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede−Kadison determinant. For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or the fact that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of "R" as value on some pair of arguments implies that "R" is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs.
Examples include the "q"-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices. Manin matrices form the class closest to matrices with commutative elements. Determinants of matrices in superrings (that is, Z2-graded rings) are known as Berezinians or superdeterminants. The permanent of a matrix is defined as the determinant, except that the factors sgn("σ") occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group S"n" in Leibniz's rule. Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications like checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques. Computational geometry, however, does frequently use calculations related to determinants. Naive methods of implementing an algorithm to compute the determinant include using the Leibniz formula or Laplace's formula. Both these approaches are extremely inefficient for large matrices, though, since the number of required operations grows very quickly: it is of order "n"! ("n" factorial) for an "n"×"n" matrix "M". For example, Leibniz's formula requires calculating "n"! products. Therefore, more involved techniques have been developed for calculating determinants. Given a matrix "A", some methods compute its determinant by writing "A" as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order O("n"3), which is a significant improvement over O("n"!).
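The relation between determinant and permanent is just the presence or absence of the sign sgn("σ") in the Leibniz sum, which a small Python sketch (illustrative) makes concrete:

```python
from itertools import permutations

def leibniz(matrix, use_sign=True):
    """Leibniz-style sum over permutations: with signs it is the
    determinant, without them it is the permanent."""
    n = len(matrix)
    total = 0
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        sign = (-1) ** inversions if use_sign else 1
        prod = 1
        for i in range(n):
            prod *= matrix[i][p[i]]
        total += sign * prod
    return total

A = [[1, 2], [3, 4]]
assert leibniz(A) == -2                    # determinant: 1*4 - 2*3
assert leibniz(A, use_sign=False) == 10    # permanent:   1*4 + 2*3
```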
The LU decomposition expresses "A" in terms of a lower triangular matrix "L", an upper triangular matrix "U" and a permutation matrix "P": The determinants of "L" and "U" can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of "P" is just the sign "ε" of the corresponding permutation (+1 for an even permutation and −1 for an odd permutation). The determinant of "A" is then det("A") = "ε" det("L") det("U"). (See determinant identities.) Moreover, the decomposition can be chosen such that "L" is a unitriangular matrix and therefore has determinant 1, in which case the formula further simplifies to det("A") = "ε" det("U"). If the determinant of "A" and the inverse of "A" have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of "A" + "uv"T, where "u" and "v" are column vectors. Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run-time proportional to "n"4 exist. An algorithm of Mahajan and Vinay, and Berkowitz is based on closed ordered walks (short "clow"). It computes more products than the determinant definition requires, but some of these products cancel and the sum of these products can be computed more efficiently. The final algorithm looks very much like an iterated product of triangular matrices. If two matrices of order "n" can be multiplied in time "M"("n"), where "M"("n") ≥ "n""a" for some "a" > 2, then the determinant can be computed in time O("M"("n")). This means, for example, that an O("n"2.376) algorithm exists based on the Coppersmith–Winograd algorithm. Charles Dodgson (i.e. Lewis Carroll of "Alice's Adventures in Wonderland" fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form.
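The matrix determinant lemma mentioned above can be verified directly in the 2×2 case. The Python sketch below is an illustration with arbitrarily chosen numbers; it checks det("A" + "uv"T) = (1 + "v"T"A"−1"u") det("A"):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[4.0, 1.0], [2.0, 3.0]]            # det(A) = 10
dA = det2(A)
Ainv = [[A[1][1] / dA, -A[0][1] / dA],  # explicit 2x2 inverse
        [-A[1][0] / dA, A[0][0] / dA]]
u, v = [1.0, 2.0], [3.0, 1.0]

# rank-one update A + u v^T
updated = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]

# matrix determinant lemma: det(A + u v^T) = (1 + v^T A^{-1} u) det(A)
Ainv_u = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]
lemma = (1 + sum(v[i] * Ainv_u[i] for i in range(2))) * dA
assert abs(det2(updated) - lemma) < 1e-9
```

Both sides evaluate to 19 here; the point of the lemma is that the right-hand side reuses the already-computed det("A") and "A"−1 instead of re-decomposing the updated matrix.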
Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order O("n"3), but the bit length of intermediate values can become exponentially long. The Bareiss algorithm, on the other hand, an exact-division method based on Sylvester's identity, is also of order "n"3, but the bit complexity is roughly the bit size of the original entries in the matrix times "n". Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook "The Nine Chapters on the Mathematical Art" (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz. In Japan, Seki Takakazu is credited with the discovery of the resultant and the determinant (at first in 1683, the complete version no later than 1710). In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrence law was first announced by Bézout (1764). It was Vandermonde (1771) who first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied it to questions of elimination theory; he proved many special cases of general identities. Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers.
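The Bareiss method can be sketched concisely. In the Python illustration below (not from the article), every intermediate division is exact, so an integer matrix yields its determinant in pure integer arithmetic and intermediate entries stay small:

```python
def det_bareiss(matrix):
    """Bareiss fraction-free elimination: every division is exact, so the
    determinant of an integer matrix is computed in integer arithmetic."""
    a = [row[:] for row in matrix]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:
            # find a row to swap in; a swap flips the sign
            swap = next((r for r in range(k + 1, n) if a[r][k] != 0), None)
            if swap is None:
                return 0
            a[k], a[swap] = a[swap], a[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # the division by the previous pivot is always exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[n - 1][n - 1]
```

After the final elimination step the bottom-right entry is the determinant itself, with no fractions ever appearing.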
He introduced the word determinant (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem. The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of "m" columns and "n" rows, which for the special case "m" = "n" reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word determinant in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality. The next important figure was Jacobi (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in "Crelle's Journal" for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called "alternants". About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work. The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi.
Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises. As mentioned above, the determinant of a matrix (with real or complex entries, say) is zero if and only if the column vectors (or the row vectors) of the matrix are linearly dependent. Thus, determinants can be used to characterize linearly dependent vectors. For example, given two linearly independent vectors "v"1, "v"2 in R3, a third vector "v"3 lies in the plane spanned by the former two vectors exactly if the determinant of the matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given "n" functions "f"1("x"), ..., "f""n"("x") (supposed to be "n"−1 times differentiable), the Wronskian is defined to be It is non-zero (for some "x") in a specified interval if and only if the given functions and all their derivatives up to order "n"−1 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. The determinant can be thought of as assigning a number to every sequence of "n" vectors in R"n", by using the square matrix whose columns are the given vectors. For instance, an orthogonal matrix with entries in R"n" represents an orthonormal basis in Euclidean space. The determinant of such a matrix determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. If the determinant is +1, the basis has the same orientation. If it is −1, the basis has the opposite orientation. More generally, if the determinant of "A" is positive, "A" represents an orientation-preserving linear transformation (if "A" is an orthogonal matrix, this is a rotation), while if it is negative, "A" switches the orientation of the basis.
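The coplanarity test described above ("v"3 lies in the plane of "v"1 and "v"2 exactly when the determinant of the three vectors vanishes) reduces in R3 to a scalar triple product; a minimal Python sketch (illustrative):

```python
def triple(u, v, w):
    # scalar triple product u . (v x w) = det of the matrix with rows u, v, w
    cx = (v[1] * w[2] - v[2] * w[1],
          v[2] * w[0] - v[0] * w[2],
          v[0] * w[1] - v[1] * w[0])
    return u[0] * cx[0] + u[1] * cx[1] + u[2] * cx[2]

v1, v2 = (1, 0, 0), (0, 1, 0)
assert triple(v1, v2, (2, 3, 0)) == 0   # lies in the plane of v1 and v2
assert triple(v1, v2, (0, 0, 1)) != 0   # linearly independent of v1, v2
```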
As pointed out above, the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if "f" is the linear map represented by the matrix "A", and "S" is any measurable subset of R"n", then the volume of "f"("S") is given by |det("A")| times the volume of "S". More generally, if the linear map "f": R"n" → R"m" is represented by the "m"×"n" matrix "A", then the "n"-dimensional volume of "f"("S") is given by √(det("A"T"A")) times the volume of "S". By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines. The volume of any tetrahedron, given its vertices a, b, c, and d, is 1/6·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that would form a spanning tree over the vertices. For a general differentiable function, much of the above carries over by considering the Jacobian matrix of "f". For "f": R"n" → R"n", the Jacobian matrix is the "n"×"n" matrix whose entries are given by the partial derivatives ∂"f""i"/∂"x""j". Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions "f" and an open subset "U" of R"n" (the domain of "f"), the integral over "f"("U") of some other function is given by The Jacobian also occurs in the inverse function theorem. The "n"th-order Vandermonde determinant is the continued product of all the differences that can be formed from the pairs of numbers taken from "x"1, "x"2, ..., "x""n", with the order of the differences taken in the reversed order of the suffixes that are involved. A circulant determinant of order three factors using "ω" and "ω"2, the complex cube roots of 1; in general, the "n"th-order circulant determinant factors into "n" linear terms, where "ω""j" is an "n"th root of 1.
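The Vandermonde identity can be spot-checked by comparing a brute-force Leibniz determinant with the product of differences; a short Python illustration (not from the article):

```python
from itertools import permutations

def det(m):
    # Leibniz formula: signed sum over all permutations
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

xs = [1, 2, 4]
# Vandermonde matrix: row i is (1, x_i, x_i^2, ...)
V = [[x ** j for j in range(len(xs))] for x in xs]
product = 1
for j in range(len(xs)):
    for i in range(j):
        product *= xs[j] - xs[i]       # product of (x_j - x_i) for i < j
assert det(V) == product
```

For the nodes 1, 2, 4 both sides equal (2−1)(4−1)(4−2) = 6.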
https://en.wikipedia.org/wiki?curid=8468
David Ricardo David Ricardo (18 April 1772 – 11 September 1823) was a British political economist, one of the most influential of the classical economists along with Thomas Malthus, Adam Smith and James Mill. Born in London, England, Ricardo was the third surviving child of the 17 children of Abigail Delvalle (1753–1801) and her husband Abraham Israel Ricardo (1733?–1812). His family were Sephardic Jews of Portuguese origin who had recently relocated from the Dutch Republic. His father was a successful stockbroker and Ricardo began working with him at the age of 14. At the age of 21, Ricardo eloped with a Quaker, Priscilla Anne Wilkinson, and, against his father's wishes, converted to the Unitarian faith. This religious difference resulted in estrangement from his family, and he was led to adopt a position of independence. His father disowned him and his mother apparently never spoke to him again. Following this estrangement he went into business for himself with the support of Lubbocks and Forster, an eminent banking house. He made the bulk of his fortune as a result of speculation on the outcome of the Battle of Waterloo. "The Sunday Times" reported in Ricardo's obituary, published on 14 September 1823, that during the Battle of Waterloo Ricardo "netted upwards of a million sterling", a huge sum at the time. He immediately retired, his position on the floor no longer tenable, and subsequently purchased Gatcombe Park, an estate in Gloucestershire now owned by Princess Anne, the Princess Royal, and retired to the country. He was appointed High Sheriff of Gloucestershire for 1818–19. In August 1818 he bought Lord Portarlington's seat in Parliament for £4,000, as part of the terms of a loan of £25,000. His record in Parliament was that of an earnest reformer. He held the seat until his death five years later. Ricardo was a close friend of James Mill.
Other notable friends included Jeremy Bentham and Thomas Malthus, with whom Ricardo had a considerable debate (in correspondence) over such things as the role of landowners in a society. He also was a member of Malthus' Political Economy Club, and a member of the King of Clubs. He was one of the original members of The Geological Society. His youngest sister was author Sarah Ricardo-Porter (e.g., "Conversations in Arithmetic"). As MP for Portarlington, he voted with the opposition in support of the liberal movements in Naples, 21 February, and Sicily, 21 June, and for inquiry into the administration of justice in Tobago, 6 June. He divided for repeal of the Blasphemous and Seditious Libels Act, 8 May, inquiry into the Peterloo massacre, 16 May, and abolition of the death penalty for forgery, 25 May, 4 June 1821. He adamantly supported the implementation of free trade. He voted against renewal of the sugar duties, 9 Feb, and objected to the higher duty on East as opposed to West Indian produce, 4 May 1821. He opposed the timber duties. He voted silently for parliamentary reform, 25 Apr, 3 June, and spoke in its favour at the Westminster anniversary reform dinner, 23 May 1822. He again voted for criminal law reform, 4 June. His friend John Louis Mallett commented: " … he meets you upon every subject that he has studied with a mind made up, and opinions in the nature of mathematical truths. He spoke of parliamentary reform and ballot as a man who would bring such things about, and destroy the existing system tomorrow, if it were in his power, and without the slightest doubt on the result … It is this very quality of the man’s mind, his entire disregard of experience and practice, which makes me doubtful of his opinions on political economy." Ten years after retiring and four years after entering Parliament Ricardo died from an infection of the middle ear that spread into his brain and induced septicaemia. He was 51.
He and his wife Priscilla had eight children together including Osman Ricardo (1795–1881; MP for Worcester 1847–1865), David Ricardo (1803–1864, MP for Stroud 1832–1833) and Mortimer Ricardo, who served as an officer in the Life Guards and was a deputy lieutenant for Oxfordshire. Ricardo is buried in an ornate grave in the churchyard of Saint Nicholas in Hardenhuish, now a suburb of Chippenham, Wiltshire. At the time of his death his assets were estimated at between £675,000–£775,000. He wrote his first economics article at age 37, first in "The Morning Chronicle", advocating reduction in the note-issuing of the Bank of England, and then publishing "The High Price of Bullion, a Proof of the Depreciation of Bank Notes" in 1810. He was also an abolitionist, speaking at a meeting of the Court of the East India Company in March 1823, where he said he regarded slavery as a stain on the character of the nation. Ricardo's most famous work is his "Principles of Political Economy and Taxation" (1817). He advanced a labor theory of value: The value of a commodity, or the quantity of any other commodity for which it will exchange, depends on the relative quantity of labour which is necessary for its production, and not on the greater or less compensation which is paid for that labour. Ricardo's note to Section VI: Mr. Malthus appears to think that it is a part of my doctrine, that the cost and value of a thing be the same;—it is, if he means by cost, "cost of production" including profit. Ricardo contributed to the development of theories of rent, wages, and profits. He defined rent as "the difference between the produce obtained by the employment of two equal quantities of capital and labor." Ricardo believed that the process of economic development, which increased land utilization and eventually led to the cultivation of poorer land, principally benefited landowners.
According to Ricardo, such a premium over "real social value" that is reaped due to ownership constitutes value to an individual but is at best a paper monetary return to "society". The portion of such purely individual benefit that accrues to scarce resources Ricardo labels "rent". In his "Theory of Profit", Ricardo stated that as real wages increase, real profits decrease because the revenue from the sale of manufactured goods is split between profits and wages. He said in his "Essay on Profits", "Profits depend on high or low wages, wages on the price of necessaries, and the price of necessaries chiefly on the price of food." Between 1500 and 1750 most economists advocated Mercantilism which promoted the idea of international trade for the purpose of earning bullion by running a trade surplus with other countries. Ricardo challenged the idea that the purpose of trade was merely to accumulate gold or silver. With "comparative advantage" Ricardo argued in favour of industry specialisation and free trade. He suggested that industry specialization combined with free international trade always produces positive results. This theory expanded on the concept of absolute advantage. Ricardo suggested that there is mutual national benefit from trade even if one country is more competitive in every area than its trading counterpart and that a nation should concentrate resources only in industries where it has a comparative advantage, that is in those industries in which it has the greatest competitive edge. Ricardo suggested that national industries which were, in fact, profitable and internationally competitive should be jettisoned in favour of the most competitive industries, the assumption being that subsequent economic growth would more than offset any economic dislocation which would result from closing profitable and competitive national industries. Ricardo attempted to prove theoretically that international trade is always beneficial.
Paul Samuelson called the numbers used in Ricardo's example dealing with trade between England and Portugal the "four magic numbers". "In spite of the fact that the Portuguese could produce both cloth and wine with less amount of labor, Ricardo suggested that both countries would benefit from trade with each other". As for recent extensions of Ricardian models, see Ricardian trade theory extensions. Ricardo's theory of international trade was reformulated by John Stuart Mill. The term "comparative advantage" was coined by J. S. Mill and his contemporaries. John Stuart Mill started a neoclassical turn of international trade theory, i.e. his formulation was inherited by Alfred Marshall and others and contributed to the resurrection of the anti-Ricardian concept of the law of supply and demand and induced the arrival of the neoclassical theory of value. Ricardo's four magic numbers have long been interpreted as a comparison of two ratios of labor input coefficients. This interpretation is now considered erroneous. This point was first made by Roy J. Ruffin in 2002 and examined and explained in detail by Andrea Maneschi in 2004. This is now known as the "new interpretation", but it had been mentioned by P. Sraffa in 1930 and by Kenzo Yukizawa in 1974. The new interpretation affords a totally new reading of Ricardo's "Principles of Political Economy and Taxation" with regards to trade theory. Like Adam Smith, Ricardo was an opponent of protectionism for national economies, especially for agriculture. He believed that the British "Corn Laws" – imposing tariffs on agricultural products – ensured that less-productive domestic land would be cultivated and rents would be driven up. Thus, profits would be directed toward landlords and away from the emerging industrial capitalists. Ricardo believed landlords tended to squander their wealth on luxuries, rather than invest. He believed the Corn Laws were leading to the stagnation of the British economy.
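Ricardo's classic numbers (labour per unit of output: England 100 for cloth and 120 for wine; Portugal 90 for cloth and 80 for wine) make the gain from specialisation explicit. The Python sketch below is a modern illustration, not Ricardo's own presentation:

```python
# Ricardo's "four magic numbers": labour needed per unit of output
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

# Portugal has an absolute advantage in both goods, but the opportunity
# costs differ, so comparative advantage still assigns each country a good.
for country, h in hours.items():
    print(country, "cloth costs", h["cloth"] / h["wine"], "units of wine")

# Autarky: each country produces one unit of each good for itself.
autarky = sum(h["cloth"] + h["wine"] for h in hours.values())   # 390 hours

# Specialisation: England makes 2 cloth, Portugal makes 2 wine.
specialised = 2 * hours["England"]["cloth"] + 2 * hours["Portugal"]["wine"]
assert specialised == 360 and autarky == 390   # 30 hours of labour saved
```

Total world output (two units of each good) is unchanged, but 30 hours of labour are freed, which is the mutual gain Ricardo's argument turns on.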
In 1846, his nephew John Lewis Ricardo, MP for Stoke-upon-Trent, advocated free trade and the repeal of the Corn Laws. Modern empirical analysis of the Corn Laws yields mixed results. Parliament repealed the Corn Laws in 1846. Ricardo was concerned about the impact of technological change on labor in the short term. In 1821, he wrote that he had become "convinced that the substitution of machinery for human labour, is often very injurious to the interests of the class of labourers," and that "the opinion entertained by the labouring class, that the employment of machinery is frequently detrimental to their interests, is not founded on prejudice and error, but is conformable to the correct principles of political economy." Ricardo himself was the first to recognize that comparative advantage is a domain-specific theory, meaning that it applies only when certain conditions are met. Ricardo noted that the theory applies only in situations where capital is immobile. Regarding his famous example, he wrote: "it would undoubtedly be advantageous to the capitalists [and consumers] of England… [that] the wine and cloth should both be made in Portugal [and that] the capital and labour of England employed in making cloth should be removed to Portugal for that purpose." Ricardo recognized that applying his theory in situations where capital was mobile would result in offshoring, and thereby economic decline and job loss. To correct for this, he argued that (i) "most men of property [will be] satisfied with a low rate of profits in their own country, rather than seek[ing] a more advantageous employment for their wealth in foreign nations," and (ii) that capital was functionally immobile. Ricardo's argument in favour of free trade has also been attacked by those who believe trade restriction can be necessary for the economic development of a nation. Utsa Patnaik claims that the Ricardian theory of international trade contains a logical fallacy.
Ricardo assumed that in both countries two goods are producible and actually are produced, but developed and underdeveloped countries often trade those goods which are not producible in their own country. In these cases, one cannot define which country has comparative advantage. Critics also argue that Ricardo's theory of comparative advantage is flawed in that it assumes production is continuous and absolute. In the real world, events outside the realm of human control (e.g. natural disasters) can disrupt production. In this case, specialisation could cripple a country that depends on imports from foreign countries whose production has been disrupted. For example, if an industrially based country trades its manufactured goods with an agrarian country in exchange for agricultural products, a natural disaster in the agricultural country (e.g. drought) may cause the industrially based country to starve. As Joan Robinson pointed out, following the opening of free trade with England, Portugal endured centuries of economic underdevelopment: "the imposition of free trade on Portugal killed off a promising textile industry and left her with a slow-growing export market for wine, while for England, exports of cotton cloth led to accumulation, mechanisation and the whole spiralling growth of the industrial revolution". Robinson argued that Ricardo's example required that economies be in static equilibrium positions with full employment and that there could not be a trade deficit or a trade surplus. These conditions, she wrote, were not relevant to the real world. She also argued that Ricardo's mathematics did not take into account that some countries may be at different levels of development, raising the prospect of 'unequal exchange' which might hamper a country's development, as in the case of Portugal. The development economist Ha-Joon Chang challenges the argument that free trade benefits every country: Ricardo’s theory is absolutely right—within its narrow confines.
His theory correctly says that, "accepting their current levels of technology as given", it is better for countries to specialize in things that they are relatively better at. One cannot argue with that. His theory fails when a country wants to acquire more advanced technologies—that is, when it wants to develop its economy. It takes time and experience to absorb new technologies, so technologically backward producers need a period of protection from international competition during this period of learning. Such protection is costly, because the country is giving up the chance to import better and cheaper products. However, it is a price that has to be paid if it wants to develop advanced industries. Ricardo’s theory is, thus seen, for those who accept the "status quo" but not for those who want to change it. Another idea associated with Ricardo is Ricardian equivalence, an argument suggesting that in some circumstances a government's choice of how to pay for its spending ("i.e.," whether to use tax revenue or issue debt and run a deficit) might have no effect on the economy. This is due to the fact that the public saves its excess money to pay for expected future tax increases that will be used to pay off the debt. Ricardo noted that the proposition is theoretically implied in the presence of intertemporal optimisation by rational tax-payers, but that since tax-payers do not act so rationally, the proposition fails to be true in practice. Thus, while the proposition bears his name, he does not seem to have believed it. Economist Robert Barro is responsible for its modern prominence. David Ricardo's ideas had a tremendous influence on later developments in economics. US economists rank Ricardo as the second most influential economic thinker, behind Adam Smith, prior to the twentieth century. Ricardo became the theoretical father of classical political economy.
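The intuition behind Ricardian equivalence can be shown in a stylized two-period model. The numbers, the single representative household, and the assumption that the household discounts at the government's borrowing rate are illustrative assumptions, not Ricardo's own:

```python
# Stylized two-period sketch of Ricardian equivalence. Illustrative
# assumptions: lump-sum taxes, one household, and an interest rate r that
# is also the household's discount rate.
r = 0.05          # interest/discount rate
spending = 20.0   # government spending in period 1

def pv_of_taxes(tax_now: float) -> float:
    """Present value of the household's total tax bill when the government
    taxes `tax_now` in period 1 and borrows the rest, repaying the debt with
    interest (via a period-2 tax) in period 2."""
    debt = spending - tax_now
    tax_later = debt * (1 + r)
    return tax_now + tax_later / (1 + r)

# Fully tax-financed, fully debt-financed, or any mix: the present value of
# the household's tax burden is identical, so a forward-looking saver's
# consumption plan need not change.
assert abs(pv_of_taxes(20.0) - pv_of_taxes(0.0)) < 1e-9
assert abs(pv_of_taxes(10.0) - pv_of_taxes(0.0)) < 1e-9
```

The equivalence breaks exactly where Ricardo said it would: if taxpayers do not optimise over both periods, the timing of taxes does affect behaviour.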
However, Schumpeter coined the expression "Ricardian vice", which indicates that rigorous logic does not by itself provide a good economic theory. This criticism applies also to most neoclassical theories, which make heavy use of mathematics, but are, according to him, theoretically unsound, because the conclusion being drawn does not logically follow from the theories used to defend it. Ricardo's writings fascinated a number of early socialists in the 1820s, who thought his value theory had radical implications. They argued that, in view of the labor theory of value, labor produces the entire product, and the profits capitalists get are a result of the exploitation of workers. These include Thomas Hodgskin, William Thompson, John Francis Bray, and Percy Ravenstone. Georgists believe that rent, in the sense that Ricardo used, belongs to the community as a whole. Henry George was greatly influenced by Ricardo, and often cited him, including in his most famous work, "Progress and Poverty" (1879). In the preface to the fourth edition, he wrote: "What I have done in this book, if I have correctly solved the great problem I have sought to investigate, is, to unite the truth perceived by the school of Smith and Ricardo to the truth perceived by the school of Proudhon and Lasalle; to show that laissez faire (in its full true meaning) opens the way to a realization of the noble dreams of socialism; to identify social law with moral law, and to disprove ideas which in the minds of many cloud grand and elevating perceptions." After the rise of the 'neoclassical' school, Ricardo's influence declined temporarily. It was Piero Sraffa, the editor of the Collected Works of David Ricardo and the author of the seminal "Production of Commodities by Means of Commodities", who resurrected Ricardo as the originator of another strand of economic thought, which was effaced with the arrival of the neoclassical school.
The new interpretation of Ricardo and Sraffa's criticism of the marginal theory of value gave rise to a new school, now named the neo-Ricardian or Sraffian school. Major contributors to this school include Luigi Pasinetti (1930–), Pierangelo Garegnani (1930–2011), Ian Steedman (1941–), Geoffrey Harcourt (1931–), Heinz Kurz (1946–), Neri Salvadori (1951–), and Pier Paolo Saviotti (–), among others. See also Neo-Ricardianism. The neo-Ricardian school is sometimes seen as a component of Post-Keynesian economics. Inspired by Piero Sraffa, a new strand of trade theory emerged and was named neo-Ricardian trade theory. The main contributors include Ian Steedman and Stanley Metcalfe. They have criticised neoclassical international trade theory, namely the Heckscher–Ohlin model, on the basis that the notion of capital as a primary factor has no method of measurement before the determination of the profit rate (and is thus trapped in a logical vicious circle). This was a second round of the Cambridge capital controversy, this time in the field of international trade. Depoortère and Ravix judge that the neo-Ricardian contribution failed to have an effective impact on neoclassical trade theory, because it could not offer "a genuine alternative approach from a classical point of view." Several distinctive groups have sprung out of the neo-Ricardian school. One is the evolutionary growth theory, developed notably by Luigi Pasinetti, J.S. Metcalfe, Pier Paolo Saviotti, and Koen Frenken, among others. Pasinetti argued that the demand for any commodity comes to stagnate and frequently decline as demand saturation occurs. Introduction of new commodities (goods and services) is necessary to avoid economic stagnation. Ricardo's idea was even expanded to the case of a continuum of goods by Dornbusch, Fischer, and Samuelson. This formulation is employed, for example, by Matsuyama and others. Ricardian trade theory ordinarily assumes that labour is the unique input.
This is a deficiency, as intermediate goods now occupy a great part of international trade. The situation changed after the appearance of 's work of 2007, which succeeded in incorporating traded input goods into the model. Yeats found that 30% of world trade in manufacturing consists of intermediate inputs. Bardhan and Jafee found that intermediate inputs occupied 37 to 38% of imports to the US for the years from 1992 to 1997, whereas the percentage of intrafirm trade grew from 43% in 1992 to 52% in 1997. Chris Edward includes Emmanuel's unequal exchange theory among variations of neo-Ricardian trade theory. Arghiri Emmanuel argued that the Third World is poor because of the international exploitation of labour. The unequal exchange theory of trade has been influential on the (new) dependency theory. Ricardo's publications included: His works and writings were collected in
https://en.wikipedia.org/wiki?curid=8470
Delphinus Delphinus is a constellation in the northern sky, close to the celestial equator. Its name is the Latin form of the Greek word for dolphin (δελφίνι). Delphinus was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains among the 88 modern constellations recognized by the International Astronomical Union. It is one of the smaller constellations, ranked 69th in size. Delphinus' brightest stars form a distinctive asterism that can easily be recognized. It is bordered (clockwise from north) by Vulpecula the fox, Sagitta the arrow, Aquila the eagle, Aquarius the water-carrier, Equuleus the foal and Pegasus the flying horse. Delphinus lacks stars above fourth (apparent) magnitude; its brightest star is of magnitude 3.8. The main asterism in Delphinus is Job's Coffin, a lozenge (diamond) with an apex angle of nearly 45° formed by the four brightest stars: Alpha, Beta, Gamma, and Delta Delphini. Delphinus lies in a rich Milky Way star field. Alpha and Beta Delphini bear the 19th-century names Sualocin and Rotanev, which read backwards as Nicolaus Venator, the Latinized name of a Palermo Observatory director, Niccolò Cacciatore (d. 1841). Alpha Delphini is a blue-white hued main sequence star of magnitude 3.8, 241 light-years from Earth. Beta Delphini, called Rotanev, is a close binary whose gap between components is visible only in large amateur telescopes. To the unaided eye, it appears to be a white star of magnitude 3.6. It has a period of 27 years and is 97 light-years from Earth. Gamma Delphini is a celebrated binary star among amateur astronomers. The primary is orange-gold of magnitude 4.3; the secondary is a light yellow star of magnitude 5.1. The pair form a true binary with an estimated orbital period of over 3,000 years. At 125 light-years away, the two components are visible in a small amateur telescope. The secondary, also described as green, is 10 arcseconds from the primary.
Struve 2725, called the "Ghost Double", is a pair that appears similar but dimmer. Its components of magnitudes 7.6 and 8.4 are separated by 6 arcseconds and lie 15 arcminutes from Gamma Delphini itself. Delta Delphini is a type A7 IIIp star of magnitude 4.43. Epsilon Delphini, Deneb Dulfim (lit. "tail [of the] Dolphin"), or Aldulfin, is a star of stellar class B6 III and magnitude 4, at 330 ly. Marking the extremes of distance in Delphinus, Gliese 795 is the closest known star at 54.95 ly, moving rapidly east over a period of centuries (863±3 milliarcseconds per year), whereas the blue-hued giant W Delphini lies at 2203.81 ly at magnitude 9.76. Its brightness ranges from a magnitude of 12.3 to a magnitude of 9.7 over its variable period, as it is a Beta Persei-type semi-detached system. Other variable stars visible in large amateur telescopes include R Delphini, a Mira-type variable star with a period of 285.5 days. Its magnitude ranges between a maximum of 7.6 and a minimum of 13.8. Rho Aquilae, at magnitude 4.94, lies at about 150 light years. Due to its proper motion, it has been within the bounds of the constellation since 1992. HR Delphini was a nova that brightened to magnitude 3.5 in December 1967. Another nova, Nova Delphini 2013, was discovered by amateur astronomer Koichi Itagaki. Giants within our galaxy in Delphinus aside from Delta, Gamma and Epsilon include: Its rich Milky Way star field means Delphinus contains many modest deep-sky objects. NGC 6891 is a planetary nebula of magnitude 10.5; another is NGC 6905, or the Blue Flash nebula. NGC 6934 is a globular cluster of magnitude 9.75. At a distance of about 185,000 light-years, the globular cluster NGC 7006 is at the outer reaches of the galaxy. It is also fairly dim, at magnitude 11.5. Delphinus is associated with two stories from Greek mythology. According to the first, the Greek god Poseidon wanted to marry Amphitrite, a beautiful nereid. However, wanting to protect her virginity, she fled to the Atlas mountains.
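The apparent magnitudes and distances quoted above can be combined via the distance modulus, M = m - 5 log10(d_pc / 10), to estimate a star's absolute magnitude. A small sketch, where the conversion constant is the standard light-years-per-parsec figure and the star data are the numbers quoted in the text:

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Absolute magnitude via the distance modulus M = m - 5*log10(d_pc/10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# Alpha Delphini: apparent magnitude 3.8 at 241 light-years (figures above),
# giving an intrinsically bright star of absolute magnitude about -0.54.
print(round(absolute_magnitude(3.8, 241), 2))  # -0.54
```

A star exactly 10 parsecs (about 32.6 light-years) away has, by definition, an absolute magnitude equal to its apparent magnitude.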
Her suitor then sent out several searchers, among them a certain Delphinus. Delphinus accidentally stumbled upon her and was able to persuade Amphitrite to accept Poseidon's wooing. Out of gratitude the god placed the image of a dolphin among the stars. The second story tells of the Greek poet Arion of Lesbos (7th century BC), who was saved by a dolphin. He was a court musician at the palace of Periander, ruler of Corinth. Arion had amassed a fortune during his travels to Sicily and Italy. On his way home from Tarentum his wealth caused the crew of his ship to conspire against him. Threatened with death, Arion asked for one last wish, which the crew granted: he wanted to sing a dirge. This he did, and while doing so, flung himself into the sea. There, he was rescued by a dolphin which had been charmed by Arion's music. The dolphin carried Arion to the coast of Greece and left. In Chinese astronomy, the stars of Delphinus are located within "the Black Tortoise of the North" (北方玄武, "Běi Fāng Xuán Wǔ"). In Polynesia, two cultures recognized Delphinus as a constellation. In Pukapuka, it was called "Te Toloa" and in the Tuamotus, it was called "Te Uru-o-tiki". USS Delphinus (AF-24) and USS Delphinus (PHM-1), two United States Navy ships, are named after the constellation. A house at Sutton Girls is named Delphinus.
https://en.wikipedia.org/wiki?curid=8471
Disk storage Disk storage (also sometimes called drive storage) is a general category of storage mechanisms where data is recorded by various electronic, magnetic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive (HDD) containing a non-removable disk, the floppy disk drive (FDD) and its removable floppy disk, and various optical disc drives (ODD) and associated optical disc media. (The spellings "disk" and "disc" are used interchangeably except where trademarks preclude one usage, e.g. the Compact Disc logo. The choice of a particular form is frequently historical, as in IBM's usage of the "disk" form beginning in 1956 with the "IBM 350 disk storage unit".) Audio information was originally recorded by analog methods (see Sound recording and reproduction). Similarly, the first video disc used an analog recording method. In the music industry, analog recording has been mostly replaced by digital optical technology, where the data is recorded in a digital format as optical information. The first commercial digital disk storage device was the IBM 350, which shipped in 1956 as part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already used sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage; however, the total cost of ownership of data on disk, including power and management, remains larger than that of tape. Disk storage is now used in both computer storage and consumer electronic storage, e.g., audio CDs and video discs (VCD, standard DVD and Blu-ray).
Data on modern disks is stored in fixed-length blocks, usually called sectors and varying in length from a few hundred to many thousands of bytes. Gross disk drive capacity is simply the number of disk surfaces times the number of blocks per surface times the number of bytes per block. In certain legacy IBM CKD drives the data was stored on magnetic disks with variable-length blocks, called records; record length could vary on and between disks. Capacity decreased as record length decreased, due to the necessary gaps between blocks. Digital disk drives are block storage devices. Each disk is divided into logical blocks (collections of sectors). Blocks are addressed using their logical block addresses (LBA). Reading from or writing to the disk happens at the granularity of blocks. Originally disk capacity was quite low, and it has been improved in one of several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each of the disks. Advancements in data compression methods permitted more information to be stored in each of the individual sectors. The drive stores data onto cylinders, heads, and sectors. The sector is the smallest unit of data to be stored in a hard disk drive, and each file will have many sector units assigned to it. The smallest entity in a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples (two bytes × two channels × six samples = 24 bytes). The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display. The information is sent from the computer processor to the BIOS, then to a chip controlling the data transfer. This is then sent out to the hard drive via a multi-wire connector. Once the data is received onto the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store onto the disk itself.
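The gross-capacity arithmetic and logical block addressing described above can be sketched as follows. The drive geometry numbers are hypothetical; the CHS-to-LBA formula is the conventional one for a fixed-geometry drive, with sectors numbered from 1 within a track:

```python
# Gross capacity: surfaces × blocks per surface × bytes per block.
def gross_capacity(surfaces: int, blocks_per_surface: int,
                   bytes_per_block: int) -> int:
    return surfaces * blocks_per_surface * bytes_per_block

# Conventional CHS -> LBA mapping for a fixed-geometry drive; cylinders and
# heads are numbered from 0, sectors from 1 within a track.
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads: int, sectors_per_track: int) -> int:
    return (cylinder * heads + head) * sectors_per_track + (sector - 1)

# Hypothetical drive: 4 surfaces, 1,000,000 blocks per surface, 512-byte sectors.
print(gross_capacity(4, 1_000_000, 512))                    # 2048000000 bytes
print(chs_to_lba(0, 0, 1, heads=16, sectors_per_track=63))  # 0 (first sector)
print(chs_to_lba(1, 0, 1, heads=16, sectors_per_track=63))  # 1008 (one cylinder in)
```

The linear LBA numbering is why the host can treat the drive as a flat array of blocks while the drive itself worries about cylinders, heads, and sectors.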
The data is then passed to a chip on the circuit board that controls the access to the drive. The drive is divided into sectors of data stored onto one of the sides of one of the internal disks. An HDD with two disks internally will typically store data on all four surfaces. The hardware on the drive tells the actuator arm where it is to go for the relevant track, and the compressed information is then sent down to the head, which changes the physical properties, optically or magnetically for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the way that allows the quickest retrieval. Mechanically there are two different motions occurring inside the drive. One is the rotation of the disks inside the device. The other is the side-to-side motion of the head across the disk as it moves between tracks. There are two types of disk rotation methods: Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g., HDDs, FDDs, Iomega zip drives, use concentric tracks to store data. During a sequential read or write operation, after the drive accesses all the sectors in a track it repositions the head(s) to the next track. This will cause a momentary delay in the flow of data between the device and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge. When reading or writing data there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except vinyl records started at the outer edge and spiraled in toward the center. The disk drive interface is the mechanism/protocol of communication between the rest of the system and the disk drive itself. Storage devices intended for desktop and mobile computers typically use ATA (PATA) and SATA interfaces.
Enterprise systems and high-end storage devices will typically use SCSI, SAS, and FC interfaces in addition to some use of SATA.
https://en.wikipedia.org/wiki?curid=8472
Arthur Wellesley, 1st Duke of Wellington Arthur Wellesley, 1st Duke of Wellington, (1 May 1769 – 14 September 1852) was an Anglo-Irish soldier and Tory statesman who was one of the leading military and political figures of 19th-century Britain, serving twice as Prime Minister. He ended the Napoleonic Wars when he defeated Napoleon at the Battle of Waterloo in 1815. Wellesley was born in Dublin into the Protestant Ascendancy in Ireland. He was commissioned as an ensign in the British Army in 1787, serving in Ireland as aide-de-camp to two successive Lords Lieutenant of Ireland. He was also elected as a Member of Parliament in the Irish House of Commons. He was a colonel by 1796 and saw action in the Netherlands and in India, where he fought in the Fourth Anglo-Mysore War at the Battle of Seringapatam. He was appointed governor of Seringapatam and Mysore in 1799 and, as a newly appointed major-general, won a decisive victory over the Maratha Confederacy at the Battle of Assaye in 1803. Wellesley rose to prominence as a general during the Peninsular campaign of the Napoleonic Wars, and was promoted to the rank of field marshal after leading the allied forces to victory against the French Empire at the Battle of Vitoria in 1813. Following Napoleon's exile in 1814, he served as the ambassador to France and was granted a dukedom. During the Hundred Days in 1815, he commanded the allied army which, together with a Prussian Army under Blücher, defeated Napoleon at Waterloo. Wellington's battle record is exemplary; he ultimately participated in some 60 battles during the course of his military career. Wellington is famous for his adaptive defensive style of warfare, resulting in several victories against numerically superior forces while minimising his own losses. He is regarded as one of the greatest defensive commanders of all time, and many of his tactics and battle plans are still studied in military academies around the world. 
After the end of his active military career, he returned to politics. He was twice British prime minister as a member of the Tory party: from 1828 to 1830, and for a little less than a month in 1834. He oversaw the passage of the Roman Catholic Relief Act 1829, but opposed the Reform Act 1832. He continued as one of the leading figures in the House of Lords until his retirement and remained Commander-in-Chief of the British Army until his death. Wellesley was born into an aristocratic Anglo-Irish family in Ireland as The Hon. Arthur Wesley, the third of five surviving sons (fourth otherwise) of Anne and Garret Wesley, 1st Earl of Mornington. His mother was the eldest daughter of Arthur Hill-Trevor, 1st Viscount Dungannon, after whom Wellesley was named. As such, he belonged to the Protestant Ascendancy. His biographers mostly follow the same contemporary newspaper evidence in saying that he was born on 1 May 1769, the day before he was baptised. His birthplace is uncertain. He was most likely born at his parents' townhouse, 24 Upper Merrion Street, Dublin, now the Merrion Hotel. But his mother Anne, Countess of Mornington, recalled in 1815 that he had been born at 6 Merrion Street, Dublin. Other places have been put forward as the location of his birth, including Mornington House (the house next door on Upper Merrion), as his father had asserted; the Dublin packet boat; and the mansion in the family estate of Athy (consumed in the fires of 1916), as the Duke apparently put on his 1851 census return. He spent most of his childhood at his family's two homes, the first a large house in Dublin and the second Dangan Castle, north of Summerhill on the Trim Road (now the R158) in County Meath. In 1781, Arthur's father died and his eldest brother Richard inherited his father's earldom. He went to the diocesan school in Trim when at Dangan, Mr Whyte's Academy when in Dublin, and Brown's School in Chelsea when in London. 
He then enrolled at Eton College, where he studied from 1781 to 1784. His loneliness there caused him to hate it, and makes it highly unlikely that he actually said "The Battle of Waterloo was won on the playing fields of Eton", a quotation which is often attributed to him. Moreover, Eton had no playing fields at the time. In 1785, a lack of success at Eton, combined with a shortage of family funds due to his father's death, forced the young Wellesley and his mother to move to Brussels. Until his early twenties, Arthur showed little sign of distinction and his mother grew increasingly concerned at his idleness, stating, "I don't know what I shall do with my awkward son Arthur." A year later, Arthur enrolled in the French Royal Academy of Equitation in Angers, where he progressed significantly, becoming a good horseman and learning French, which later proved very useful. Upon returning to England in late 1786, he astonished his mother with his improvement. Despite his new promise, he had yet to find a job and his family was still short of money, so upon the advice of his mother, his brother Richard asked his friend the Duke of Rutland (then Lord Lieutenant of Ireland) to consider Arthur for a commission in the Army. Soon afterward, on 7 March 1787, he was gazetted ensign in the 73rd Regiment of Foot. In October, with the assistance of his brother, he was assigned as "aide-de-camp", on ten shillings a day (twice his pay as an ensign), to the new Lord Lieutenant of Ireland, Lord Buckingham. He was also transferred to the new 76th Regiment forming in Ireland and on Christmas Day, 1787, was promoted lieutenant. During his time in Dublin his duties were mainly social; attending balls, entertaining guests and providing advice to Buckingham. While in Ireland, he overextended himself in borrowing due to his occasional gambling, but in his defence stated that "I have often known what it was to be in want of money, but I have never got helplessly into debt". 
On 23 January 1788, he transferred into the 41st Regiment of Foot, then again on 25 June 1789, still a lieutenant, he transferred to the 12th (Prince of Wales's) Regiment of (Light) Dragoons and, according to military historian Richard Holmes, he also dipped a reluctant toe into politics. Shortly before the general election of 1789, he went to the rotten borough of Trim to speak against the granting of the title "Freeman" of Dublin to the parliamentary leader of the Irish Patriot Party, Henry Grattan. Succeeding, he was later nominated and duly elected as a Member of Parliament (MP) for Trim in the Irish House of Commons. Because of the limited suffrage at the time, he sat in a parliament where at least two-thirds of the members owed their election to the landowners of fewer than a hundred boroughs. Wellesley continued to serve at Dublin Castle, voting with the government in the Irish parliament over the next two years. He became a captain on 30 January 1791, and was transferred to the 58th Regiment of Foot. On 31 October, he transferred to the 18th Light Dragoons and it was during this period that he grew increasingly attracted to Kitty Pakenham, the daughter of Edward Pakenham, 2nd Baron Longford. She was described as being full of 'gaiety and charm'. In 1793, he sought her hand, but was turned down by her brother Thomas, Earl of Longford, who considered Wellesley to be a young man, in debt, with very poor prospects. An aspiring amateur musician, Wellesley, devastated by the rejection, burnt his violins in anger, and resolved to pursue a military career in earnest. He became a major by purchase in the 33rd Regiment in 1793. A few months later, in September, his brother lent him more money and with it he purchased a lieutenant-colonelcy in the 33rd. In 1793, the Duke of York was sent to Flanders in command of the British contingent of an allied force destined for the invasion of France. 
In June 1794, Wellesley with the 33rd regiment set sail from Cork bound for Ostend as part of an expedition bringing reinforcements for the army in Flanders. They arrived too late and joined the Duke of York as he was pulling back towards the Netherlands. On 15 September 1794, at the Battle of Boxtel, east of Breda, Wellington, in temporary command of his brigade, had his first experience of battle. During General Abercromby's withdrawal in the face of superior French forces, the 33rd held off enemy cavalry, allowing neighbouring units to retreat safely. During the extremely harsh winter that followed, Wellesley and his regiment formed part of an allied force holding the defence line along the Waal River. The 33rd, along with the rest of the army, suffered heavy losses from sickness and exposure. Wellesley's health was also affected by the damp environment. Though the campaign was to end disastrously, with the British army driven out of the United Provinces into Germany, Wellesley was to learn several valuable lessons, including the use of steady lines of infantry against advancing columns and of the merits of supporting sea-power. He understood that the failure of the campaign was due in part to the faults of the leaders and the poor organisation at headquarters. He remarked later of his time in the Netherlands that "At least I learned what not to do, and that is always a valuable lesson". Returning to England in March 1795, he was returned as a member of parliament for Trim for a second time. He hoped to be given the position of secretary of war in the new Irish government but the new lord-lieutenant, Lord Camden, was only able to offer him the post of Surveyor-General of the Ordnance. Declining the post, he returned to his regiment, now at Southampton preparing to set sail for the West Indies. After seven weeks at sea, a storm forced the fleet back to Poole. 
The 33rd was given time to recuperate and a few months later, Whitehall decided to send the regiment to India. Wellesley was promoted full colonel by seniority on 3 May 1796 and a few weeks later set sail for Calcutta with his regiment. Arriving in Calcutta in February 1797, he spent several months there, before being sent on a brief expedition to the Philippines, where he established a list of new hygiene precautions for his men to deal with the unfamiliar climate. Returning in November to India, he learnt that his elder brother Richard, now known as Lord Mornington, had been appointed as the new Governor-General of India. In 1798, he changed the spelling of his surname to "Wellesley"; up to this time he was still known as Wesley, which his eldest brother considered the ancient and proper spelling. As part of the campaign to extend the rule of the British East India Company, the Fourth Anglo-Mysore War broke out in 1798 against the Sultan of Mysore, Tipu Sultan. Arthur's brother Richard ordered that an armed force be sent to capture Seringapatam and defeat Tipu. Under the command of General Harris, some 24,000 troops were dispatched to Madras (to join an equal force being sent from Bombay in the west). Arthur and the 33rd sailed to join them in August. After extensive and careful logistic preparation (which would become one of Wellesley's main attributes), the 33rd left with the main force in December and travelled across of jungle from Madras to Mysore. During the journey, owing to his brother's influence, Wellesley was given an additional command, that of chief advisor to the Nizam of Hyderabad's army (sent to accompany the British force). This position was to cause friction among many of the senior officers (some of whom were senior to Wellesley). Much of this friction was put to rest after the Battle of Mallavelly, some from Seringapatam, in which Harris's army attacked a large part of the sultan's army.
During the battle, Wellesley led his men, in a line of battle of two ranks, against the enemy to a gentle ridge and gave the order to fire. After an extensive repetition of volleys, followed by a bayonet charge, the 33rd, in conjunction with the rest of Harris's force, forced Tipu's infantry to retreat. Immediately after their arrival at Seringapatam on 5 April 1799, the Battle of Seringapatam began and Wellesley was ordered to lead a night attack on the village of Sultanpettah, adjacent to the fortress, to clear the way for the artillery. Because of the enemy's strong defensive preparations, and the darkness, with the resulting confusion, the attack failed with 25 casualties. Wellesley suffered a minor injury to his knee from a spent musket-ball. Although they would attack again successfully the next day, after time to scout the enemy's positions, the affair affected Wellesley. He resolved "never to attack an enemy who is preparing and strongly posted, and whose posts have not been reconnoitred by daylight". Lewin Bentham Bowring gives an alternative account. A few weeks later, after extensive artillery bombardment, a breach was opened in the main walls of the fortress of Seringapatam. An attack led by Major-General Baird secured the fortress. Wellesley secured the rear of the advance, posting guards at the breach, and then stationed his regiment at the main palace. After hearing news of the death of the Tipu Sultan, Wellesley was the first at the scene to confirm his death, checking his pulse. Over the coming days, Wellesley grew increasingly concerned over the lack of discipline among his men, who drank and pillaged the fortress and city. To restore order, several soldiers were flogged and four hanged. After the battle and the resulting end of the war, the main force under General Harris left Seringapatam, and Wellesley, aged 30, stayed behind to command the area as the new Governor of Seringapatam and Mysore.
While in India, Wellesley was ill for a considerable time, first with severe diarrhoea from the water and then with fever, followed by a serious skin infection caused by trichophyton. As Governor of Seringapatam and Mysore, he took up residence within the Sultan's summer palace and reformed the tax and justice systems in his province to maintain order and prevent bribery. In 1800, whilst serving as governor, he was tasked with putting down an insurgency led by the mercenary and self-proclaimed 'King' Dhoondiah Waugh, formerly a Patan trooper for Tipu Sultan, who had escaped from prison in Seringapatam during the battle. After the fall of Seringapatam, Waugh became a powerful brigand, raiding villages along the Maratha–Mysore border region. Despite initial setbacks, the East India Company having pursued and destroyed his forces once already, forcing him into retreat in August 1799, he raised a sizeable force composed of disbanded Mysore soldiers, captured small outposts and forts in Mysore, and received the support of several Maratha "killedars" opposed to British occupation. This drew the attention of the British administration, who began to recognise him as more than just a bandit as his raids, his expansion and his threats to destabilise British authority increased suddenly in 1800. The death of Tipu Sultan had created a power vacuum, and Waugh was seeking to fill it. Given independent command of a combined East India Company and British Army force, Wellesley ventured north to confront Waugh in June 1800 with an army of 8,000 infantry and cavalry, having learned that Waugh's forces numbered over 50,000, although the majority (around 30,000) were irregular light cavalry and unlikely to pose a serious threat to British infantry and artillery. Throughout June–August 1800, Wellesley advanced through Waugh's territory, his troops escalading the forts in turn and capturing each one with "trifling loss". The forts generally offered little resistance owing to their poor construction and design. Wellesley did not have sufficient troops to garrison each fort, and had to clear the surrounding area of insurgents before advancing to the next. On 31 July, he had "taken and destroyed Dhoondiah's baggage and six guns, and driven into the Malpoorba (where they were drowned) about five thousand people". Dhoondiah continued to retreat, but his forces were rapidly deserting; he had no infantry, and with the monsoon flooding the river crossings he could no longer outpace the British advance. On 10 September, at the Battle of Conaghul, Wellesley personally led a charge of 1,400 British dragoons and Indian cavalry, in a single line with no reserve, against Dhoondiah and his remaining 5,000 cavalry. Dhoondiah was killed in the clash; his body was discovered and taken to the British camp tied to a cannon. With this victory the campaign was concluded and British authority restored: Wellesley, with command of four regiments, had defeated Dhoondiah's larger rebel force, along with Dhoondiah himself, who was killed in the final battle. Wellesley afterwards paid for the upkeep of Dhoondiah's orphaned son. In early 1801, Wellesley was put in charge of raising an Anglo-Indian expeditionary force in Trincomali for the capture of Batavia and Mauritius from the French. On the eve of its departure, however, orders arrived from England that the force was to be sent to Egypt to co-operate with Sir Ralph Abercromby in the expulsion of the French from Egypt. Wellesley had been appointed second in command to Baird but, owing to ill-health, did not accompany the expedition when it sailed on 9 April 1801. This turned out fortunately for him, since the very vessel on which he was to have sailed foundered with all hands in the Red Sea. He was promoted to brigadier-general on 17 July 1801. In September 1802, Wellesley learnt that he had been promoted to the rank of major-general.
He had been gazetted on 29 April 1802, but the news took several months to reach him by sea. He remained at Mysore until November, when he was sent to command an army in the Second Anglo-Maratha War. Having determined that a long defensive war would ruin his army, Wellesley decided to act boldly to defeat the numerically larger force of the Maratha Empire. With the logistic assembly of his army complete (24,000 men in total), he gave the order to break camp and attack the nearest Maratha fort on 8 August 1803. The fort surrendered on 12 August after an infantry attack had exploited an artillery-made breach in the wall. With the fort now in British control, Wellesley was able to extend control southwards to the river Godavari. Having split his army into two forces to pursue and locate the main Maratha army (the second force, commanded by Colonel Stevenson, was far smaller), Wellesley was preparing to rejoin them on 24 September. His intelligence, however, reported the location of the Marathas' main army between two rivers near Assaye. If he waited for the arrival of his second force, the Marathas would be able to mount a retreat, so Wellesley decided to launch an attack immediately. On 23 September, Wellesley led his forces over a ford in the river Kaitna and the Battle of Assaye commenced. After crossing the ford, the infantry was reorganised into several lines and advanced against the Maratha infantry. Wellesley ordered his cavalry to exploit the flank of the Maratha army near the village. During the battle Wellesley himself came under fire; two of his horses were shot from under him and he had to mount a third. At a crucial moment, Wellesley regrouped his forces and ordered Colonel Maxwell (later killed in the attack) to attack the eastern end of the Maratha position while Wellesley himself directed a renewed infantry attack against the centre.
An officer in the attack wrote of the importance of Wellesley's personal leadership: "The General was in the thick of the action the whole time ... I never saw a man so cool and collected as he was ... though I can assure you, 'til our troops got the order to advance the fate of the day seemed doubtful ..." With some 6,000 Marathas killed or wounded, the enemy was routed, though Wellesley's force was in no condition to pursue. British casualties were heavy: 409 soldiers were killed, of whom 164 were Europeans and 245 Indians; a further 1,622 were wounded and 26 reported missing (the casualty figures were taken from Wellesley's own despatch). Wellesley was troubled by the loss of men and remarked, "I should not like to see again such loss as I sustained on 23 September, even if attended by such gain". Years later, however, he remarked that Assaye, and not Waterloo, was the best battle he ever fought. Despite the damage done to the Maratha army, the battle did not end the war. A few months later, in November, Wellesley attacked a larger force near Argaum, leading his army to victory again, with an astonishing 5,000 enemy dead at the cost of only 361 British casualties. A further successful attack on the fortress at Gawilghur, combined with the victory of General Lake at Delhi, forced the Marathas to sign a peace settlement at Anjangaon (not concluded until a year later) called the Treaty of Surji-Anjangaon. Military historian Richard Holmes remarked that Wellesley's experiences in India had an important influence on his personality and military tactics, teaching him much about military matters that would prove vital to his success in the Peninsular War. These included a strong sense of discipline through drill and order, the use of diplomacy to gain allies, and the vital necessity for a secure supply line.
He also established a high regard for the acquisition of intelligence through scouts and spies. His personal tastes also developed, including dressing in white trousers, a dark tunic, Hessian boots and a black cocked hat (a look that later became synonymous with his style). Wellesley had grown tired of his time in India, remarking, "I have served as long in India as any man ought who can serve anywhere else". In June 1804 he applied for permission to return home, and as a reward for his service in India he was made a Knight of the Bath in September. While in India, Wellesley had amassed a fortune of £42,000 (considerable at the time), consisting mainly of prize money from his campaign. When his brother's term as Governor-General of India ended in March 1805, the brothers returned together to England on HMS "Howe". Arthur, coincidentally, stopped on his voyage at the little island of Saint Helena and stayed in the same building to which Napoleon I would later be exiled. In September 1805, Major-General Wellesley was newly returned from his campaigns in India and was not yet particularly well known to the public. He reported to the office of the Secretary for War to request a new assignment. In the waiting room, he met Vice-Admiral Horatio Nelson, already a legendary figure after his victories at the Nile and Copenhagen, who was briefly in England after months spent chasing the French Toulon fleet to the West Indies and back. Some 30 years later, Wellington recalled a conversation that Nelson began with him which Wellesley found "almost all on his side in a style so vain and silly as to surprise and almost disgust me". Nelson left the room to inquire who the young general was and, on his return, switched to a very different tone, discussing the war, the state of the colonies, and the geopolitical situation as between equals. Of this second discussion, Wellington recalled, "I don't know that I ever had a conversation that interested me more".
This was the only time that the two men met; Nelson was killed at his great victory at Trafalgar just seven weeks later. Wellesley then served in the abortive Anglo-Russian expedition to north Germany in 1805, taking a brigade to the Elbe. He then took a period of extended leave from the army and was elected as a Tory member of the British parliament for Rye in January 1806. A year later, he was elected MP for Newport on the Isle of Wight and was then appointed to serve as Chief Secretary for Ireland, under the Duke of Richmond. At the same time, he was made a privy counsellor. While in Ireland, he gave a verbal promise that the remaining Penal Laws would be enforced with great moderation, perhaps an indication of his later willingness to support Catholic Emancipation. Wellesley was in Ireland in May 1807 when he heard of the British expedition to Denmark. He decided to go, while retaining his political appointments, and was appointed to command an infantry brigade in the Second Battle of Copenhagen, which took place in August. He fought at Køge, during which battle the men under his command took 1,500 prisoners, with Wellesley later present during the surrender. By 30 September, he had returned to England, and he was raised to the rank of lieutenant general on 25 April 1808. In June 1808 he accepted the command of an expedition of 9,000 men. Preparing to sail for an attack on the Spanish colonies in South America (to assist the Latin American patriot Francisco de Miranda), his force was instead ordered to sail for Portugal, to take part in the Peninsular Campaign, and to rendezvous with 5,000 troops from Gibraltar. Ready for battle, Wellesley left Cork on 12 July 1808 to participate in the war against French forces in the Iberian Peninsula, where his skills as a commander would be tested and developed. According to the historian Robin Neillands, "Wellesley had by now acquired the experience on which his later successes were founded.
He knew about command from the ground up, about the importance of logistics, about campaigning in a hostile environment. He enjoyed political influence and realised the need to maintain support at home. Above all, he had gained a clear idea of how, by setting attainable objectives and relying on his own force and abilities, a campaign could be fought and won." Wellesley defeated the French at the Battle of Roliça and the Battle of Vimeiro in 1808 but was superseded in command immediately after the latter battle. General Dalrymple then signed the controversial Convention of Sintra, which stipulated that the Royal Navy transport the French army out of Lisbon with all their loot, and insisted on the association of the only available government minister, Wellesley. Dalrymple and Wellesley were recalled to Britain to face a Court of Enquiry. Wellesley had agreed to sign the preliminary armistice, but had not signed the convention, and was cleared. Meanwhile, Napoleon himself entered Spain with his veteran troops to put down the revolt; the new commander of the British forces in the Peninsula, Sir John Moore, died during the Battle of Corunna in January 1809. Although overall the land war with France was not going well from a British perspective, the Peninsula was the one theatre where they, with the Portuguese, had provided strong resistance against France and her allies. This contrasted with the disastrous Walcheren expedition, which was typical of the mismanaged British operations of the time. Wellesley submitted a memorandum to Lord Castlereagh on the defence of Portugal. He stressed its mountainous frontiers and advocated Lisbon as the main base because the Royal Navy could help to defend it. Castlereagh and the cabinet approved the memo and appointed him head of all British forces in Portugal. Wellesley arrived in Lisbon on 22 April 1809 on board HMS "Surveillante", after narrowly escaping shipwreck. Reinforced, he took to the offensive. 
In the Second Battle of Porto he crossed the Douro river in a daylight "coup de main" and routed Marshal Soult's French troops in Porto. With Portugal secured, Wellesley advanced into Spain to unite with General Cuesta's forces. The combined allied force prepared for an assault on Marshal Victor's I Corps at Talavera on 23 July. Cuesta, however, was reluctant to agree, and was only persuaded to advance on the following day. The delay allowed the French to withdraw, but Cuesta sent his army headlong after Victor, and found himself faced by almost the entire French army in New Castile, Victor having been reinforced by the Toledo and Madrid garrisons. The Spanish retreated precipitously, necessitating the advance of two British divisions to cover their retreat. The next day, 27 July, at the Battle of Talavera, the French advanced in three columns and were repulsed several times throughout the day by Wellesley, but at a heavy cost to the British force. In the aftermath, Marshal Soult's army was discovered to be advancing south, threatening to cut Wellesley off from Portugal. Wellesley moved east on 3 August to block it, leaving 1,500 wounded in the care of the Spanish, intending to confront Soult before finding out that the French were in fact 30,000 strong. The British commander sent the Light Brigade on a dash to hold the bridge over the Tagus at Almaraz. With communications and supply from Lisbon secured for now, Wellesley considered joining with Cuesta again, but found that his Spanish ally had abandoned the British wounded to the French and was thoroughly uncooperative, promising and then refusing to supply the British forces, aggravating Wellesley and causing considerable friction between the British and their Spanish allies. The lack of supplies, coupled with the threat of French reinforcement (including the possible inclusion of Napoleon himself) in the spring, led to the British deciding to retreat into Portugal.
Following his victory at Talavera, Wellesley was elevated to the Peerage of the United Kingdom on 26 August 1809 as Viscount Wellington of Talavera and of Wellington, in the County of Somerset, with the subsidiary title of Baron Douro of Wellesley. In 1810, a newly enlarged French army under Marshal André Masséna invaded Portugal. British opinion both at home and in the army was negative, and there were suggestions that they must evacuate Portugal. Instead, Lord Wellington first slowed the French down at Buçaco; he then prevented them from taking the Lisbon Peninsula by the construction of his massive earthworks, the Lines of Torres Vedras, which had been assembled in complete secrecy and had flanks guarded by the Royal Navy. The baffled and starving French invasion forces retreated after six months. Wellington's pursuit was frustrated by a series of reverses inflicted by Marshal Ney in a much-lauded rearguard campaign. In 1811, Masséna returned to Portugal to relieve Almeida; Wellington narrowly checked the French at the Battle of Fuentes de Oñoro. Simultaneously, his subordinate, Viscount Beresford, fought Soult's 'Army of the South' to a mutual bloody standstill at the Battle of Albuera in May. Wellington was promoted to full general on 31 July for his services. The French abandoned Almeida, slipping away from British pursuit, but retained the twin Spanish fortresses of Ciudad Rodrigo and Badajoz, the 'Keys' guarding the roads through the mountain passes into Portugal. In 1812, Wellington finally captured Ciudad Rodrigo by a rapid movement as the French went into winter quarters, storming it before they could react. He then moved south quickly, besieged the fortress of Badajoz for a month and captured it during one bloody night. On viewing the aftermath of the Storming of Badajoz, Wellington lost his composure and cried at the sight of the bloody carnage in the breaches.
His army was now a veteran British force reinforced by units of the retrained Portuguese army. Campaigning in Spain, he routed the French at the Battle of Salamanca, taking advantage of a minor French mispositioning. The victory liberated the Spanish capital of Madrid. As a reward, he was created Earl of Wellington, in the county of Somerset, on 22 February 1812, and then Marquess of Wellington, in the said county, on 18 August 1812, and given command of all Allied armies in Spain. Wellington attempted to take the vital fortress of Burgos, which linked Madrid to France. But failure, due in part to a lack of siege guns, forced him into a headlong retreat with over 2,000 casualties. The French abandoned Andalusia, and combined the troops of Soult and Marmont. Thus combined, the French outnumbered the British, putting the British forces in a precarious position. Wellington withdrew his army and, joined with the smaller corps commanded by Rowland Hill, began to retreat to Portugal. Marshal Soult declined to attack. In 1813, Wellington led a new offensive, this time against the French line of communications. He struck through the hills north of Burgos, the Tras os Montes, and switched his supply line from Portugal to Santander on Spain's north coast; this led to the French abandoning Madrid and Burgos. Continuing to outflank the French lines, Wellington caught up with and smashed the army of King Joseph Bonaparte at the Battle of Vitoria, for which he was promoted to field marshal on 21 June. He personally led a column against the French centre, while other columns commanded by Sir Thomas Graham, Rowland Hill and the Earl of Dalhousie looped around the French right and left (the battle became the subject of Beethoven's orchestral piece "Wellington's Victory", Op. 91). The British troops broke ranks to loot the abandoned French wagons instead of pursuing the beaten foe.
This gross abandonment of discipline caused an enraged Wellington to write in a famous dispatch to Earl Bathurst, "We have in the service the scum of the earth as common soldiers". Later, when his temper had cooled, he extended his comment to praise the men under his command, saying that though many of them were "the scum of the earth; it is really wonderful that we should have made them the fine fellows they are". After taking the small fortress of Pamplona, Wellington invested San Sebastián but was frustrated by the obstinate French garrison, losing 693 dead and 316 captured in a failed assault and suspending the siege at the end of July. Soult's relief attempt was blocked by the Spanish Army of Galicia at San Marcial, allowing the Allies to consolidate their position and tighten the ring around the city, which fell in September after a second spirited defence. Wellington then forced Soult's demoralised and battered army into a fighting retreat into France, punctuated by battles at the Pyrenees, Bidassoa and Nivelle. Wellington invaded southern France, winning at the Nive and Orthez. Wellington's final battle against his rival Soult occurred at Toulouse, where the Allied divisions were badly mauled storming the French redoubts, losing some 4,600 men. Despite this momentary victory, news arrived of Napoleon's defeat and abdication, and Soult, seeing no reason to continue the fighting, agreed a ceasefire with Wellington, allowing him to evacuate the city. Hailed as the conquering hero by the British, on 3 May 1814 Wellington was made Duke of Wellington, in the county of Somerset, together with the subsidiary title of Marquess Douro, in the said county. He received some recognition during his lifetime (the title of "Duque de Ciudad Rodrigo" and "Grandee of Spain"), and the Spanish King Ferdinand VII allowed him to keep part of the works of art from the Royal Collection which he had recovered from the French.
His equestrian portrait features prominently in the Monument to the Battle of Vitoria, in present-day Vitoria-Gasteiz. His popularity in Britain was due to his image and his appearance as well as to his military triumphs. His victory fitted well with the passion and intensity of the Romantic movement, with its emphasis on individuality. His personal style influenced fashion in Britain at the time: his tall, lean figure, plumed black hat, grand yet classic uniform and white trousers became very popular. In late 1814, the Prime Minister wanted him to take command in Canada, with the assignment of winning the War of 1812 against the United States. Wellesley replied that he would go to America, but that he believed he was needed more in Europe. He was appointed Ambassador to France, then took Lord Castlereagh's place as first plenipotentiary to the Congress of Vienna, where he strongly advocated allowing France to keep its place in the European balance of power. On 2 January 1815 the title of his Knighthood of the Bath was converted to Knight Grand Cross upon the expansion of that order. On 26 February 1815, Napoleon escaped from Elba and returned to France. He regained control of the country by May and faced a renewed alliance against him. Wellington left Vienna for what became known as the Waterloo Campaign. He arrived in the Netherlands to take command of the British-German army and their allied Dutch, all stationed alongside the Prussian forces of Generalfeldmarschall Gebhard Leberecht von Blücher. Napoleon's strategy was to isolate the Allied and Prussian armies and annihilate each one separately before the Austrians and Russians arrived. In doing so, the vast numerical superiority of the Coalition would be greatly diminished. He would then seek the possibility of peace with Austria and Russia.
The French invaded the Netherlands, with Napoleon defeating the Prussians at Ligny and Marshal Ney engaging indecisively with Wellington at the Battle of Quatre Bras. The Prussians retreated 18 miles north to Wavre, whilst Wellington's Anglo-Allied army withdrew 15 miles north to a site he had noted the previous year as favourable for a battle: the north ridge of a shallow valley on the Brussels road, just south of the small town of Waterloo. On 17 June there was torrential rain, which severely hampered movement and had a considerable effect the next day, 18 June, when the Battle of Waterloo was fought. This was the first time Wellington had encountered Napoleon; he commanded an Anglo-Dutch-German army that consisted of approximately 73,000 troops, 26,000 of whom were British. Approximately 30 percent of those 26,000 were Irish. The Battle of Waterloo commenced with a diversionary attack on Hougoumont by a division of French soldiers. After a barrage of 80 cannons, the first French infantry attack was launched by Comte D'Erlon's I Corps. D'Erlon's troops advanced through the Allied centre, resulting in Allied troops in front of the ridge retreating in disorder through the main position. D'Erlon's corps stormed the most fortified Allied position, La Haye Sainte, but failed to take it. An Allied division under Thomas Picton met the remainder of D'Erlon's corps head on, engaging them in an infantry duel in which Picton fell. During this struggle Lord Uxbridge launched two of his cavalry brigades at the enemy, catching the French infantry off guard, driving them to the bottom of the slope, and capturing two French Imperial Eagles. The charge, however, over-reached itself, and the British cavalry, crushed by fresh French horsemen hurled at them by Napoleon, were driven back, suffering tremendous losses. A little before 16:00, Marshal Ney noted an apparent exodus from Wellington's centre.
He mistook the movement of casualties to the rear for the beginnings of a retreat, and sought to exploit it. Ney at this time had few infantry reserves left, as most of the infantry had been committed either to the futile Hougoumont attack or to the defence of the French right. Ney therefore tried to break Wellington's centre with a cavalry charge alone. At about 16:30, the first Prussian corps arrived. Commanded by Freiherr von Bülow, IV Corps arrived as the French cavalry attack was in full spate. Bülow sent the 15th Brigade to link up with Wellington's left flank in the Frichermont–La Haie area, while the brigade's horse artillery battery and additional brigade artillery deployed to its left in support. Napoleon sent Lobau's corps to intercept the rest of Bülow's IV Corps proceeding to Plancenoit. The 15th Brigade drove Lobau's corps back into the Plancenoit area, and von Hiller's 16th Brigade also pushed forward with six battalions against Plancenoit. Napoleon dispatched all eight battalions of the Young Guard to reinforce Lobau, who was now seriously pressed by the enemy. The Young Guard counter-attacked and, after very hard fighting, secured Plancenoit, but were themselves counter-attacked and driven out. Napoleon then resorted to sending two battalions of the Middle and Old Guard into Plancenoit, and after ferocious fighting they recaptured the village. The French cavalry attacked the British infantry squares many times, each time at a heavy cost to the French but with few British casualties. Ney himself was unhorsed four times. Eventually, it became obvious, even to Ney, that cavalry alone were achieving little. Belatedly, he organised a combined-arms attack, using Bachelu's division and Tissot's regiment of Foy's division from Reille's II Corps, plus those French cavalry that remained in a fit state to fight. This assault was directed along much the same route as the previous heavy cavalry attacks.
Meanwhile, at approximately the same time as Ney's combined-arms assault on the centre-right of Wellington's line, Napoleon ordered Ney to capture La Haye Sainte whatever the cost. Ney accomplished this with what was left of D'Erlon's corps soon after 18:00. He then moved horse artillery up towards Wellington's centre and began to destroy the infantry squares at short range with canister. This all but destroyed the 27th (Inniskilling) Regiment, and the 30th and 73rd Regiments suffered such heavy losses that they had to combine to form a viable square. Wellington's centre was now on the verge of collapse and wide open to a French attack. Luckily for Wellington, Pirch I's and Zieten's corps of the Prussian Army were now at hand. Zieten's corps permitted the two fresh cavalry brigades of Vivian and Vandeleur on Wellington's extreme left to be moved and posted behind the depleted centre. Pirch I's corps then proceeded to support Bülow, and together they regained possession of Plancenoit; once more the Charleroi road was swept by Prussian round shot. The value of this reinforcement at this particular moment can hardly be overestimated. The French army now fiercely attacked the Coalition all along the line, the culminating point being reached when Napoleon sent forward the Imperial Guard at 19:30. The attack of the Imperial Guard was mounted by five battalions of the Middle Guard, and not by the Grenadiers or Chasseurs of the Old Guard. Marching through a hail of canister and skirmisher fire and severely outnumbered, the 3,000 or so Middle Guardsmen advanced to the west of La Haye Sainte and proceeded to separate into three distinct attack forces. One, consisting of two battalions of Grenadiers, defeated the Coalition's first line and marched on. Chassé's relatively fresh Dutch division was sent against them, and Allied artillery fired into the victorious Grenadiers' flank.
This still could not stop the Guard's advance, so Chassé ordered his first brigade to charge the outnumbered French, who faltered and broke. Further to the west, 1,500 British Foot Guards under Maitland were lying down to protect themselves from the French artillery. As two battalions of Chasseurs approached, the second prong of the Imperial Guard's attack, Maitland's guardsmen rose and devastated them with point-blank volleys. The Chasseurs deployed to counter-attack but began to waver. A bayonet charge by the Foot Guards then broke them. The third prong, a fresh Chasseur battalion, now came up in support. The British guardsmen retreated with these Chasseurs in pursuit, but the latter were halted as the 52nd Light Infantry wheeled in line onto their flank, poured a devastating fire into them, and then charged. Under this onslaught, they too broke. The last of the Guard retreated headlong. A ripple of panic passed through the French lines as the astounding news spread: "La Garde recule. Sauve qui peut!" ("The Guard retreats. Save yourself if you can!"). Wellington then stood up in the stirrups of his horse Copenhagen and waved his hat in the air to signal an advance of the Allied line, just as the Prussians were overrunning the French positions to the east. What remained of the French army then abandoned the field in disorder. Wellington and Blücher met at the inn of La Belle Alliance, on the north–south road which bisected the battlefield, and it was agreed that the Prussians should pursue the retreating French army back to France. The Treaty of Paris was signed on 20 November 1815. After the victory, the Duke supported proposals that a medal be awarded to all British soldiers who participated in the Waterloo campaign, and on 28 June 1815 he wrote to the Duke of York suggesting: ... the expediency of giving to the non commissioned officers and soldiers engaged in the Battle of Waterloo a medal.
I am convinced it would have the best effect in the army, and if the battle should settle our concerns, they will well deserve it. The Waterloo Medal was duly authorised and distributed to all ranks in 1816. There has been much historical discussion about Napoleon's decision to send 33,000 troops under Marshal Grouchy to intercept the Prussians, but—having defeated Blücher at Ligny on 16 June and forced the Allies to retreat in divergent directions—Napoleon may have been strategically astute in a judgement that he would have been unable to beat the combined Allied forces on one battlefield. Wellington's comparable strategic gamble was to leave 17,000 troops and artillery, mostly Dutch, away at Halle, north-west of Mont-Saint-Jean, in case of a French advance up the Mons-Hal-Brussels road. The campaign led to numerous other controversies, especially concerning the Prussians. For example: Were Wellington's troop dispositions prior to Napoleon's invasion of the Netherlands sound? Did Wellington somehow mislead or betray Blücher by promising, then failing, to come directly to Blücher's aid at Ligny? Who deserved the lion's share of credit for the victory—Wellington or the Prussians? These and other such issues concerning Blücher's, Wellington's, and Napoleon's decisions during the campaign were the subject of a major strategic-level study by the famous Prussian political-military theorist Carl von Clausewitz, "Feldzug von 1815: Strategische Uebersicht des Feldzugs von 1815", English title: "The Campaign of 1815: Strategic Overview of the Campaign". Written c. 1827, this study was Clausewitz's last such work and is widely considered to be the best example of Clausewitz's mature theories concerning such analyses. It attracted the attention of Wellington's staff, who prompted the Duke to write his only published essay on the campaign (other than his immediate, official after-action report, "The Waterloo Dispatch"), his 1842 "Memorandum on the Battle of Waterloo".
While Wellington disputed Clausewitz on several points, the Prussian writer largely absolved Wellington of accusations levelled against him by nationalistic German axe-grinders. This exchange with Clausewitz was quite famous in Britain in the 19th century (it was heavily discussed, for example, in Chesney's "Waterloo Lectures" (1868)). It seems, however, to have been systematically ignored by British historians writing since 1914, which is odd considering that it was one of only two discussions of the battle that Wellington wrote. The explanation, unfortunately, is probably that it drew too much attention to the decisive German role in Wellington's victory—which Wellington himself was perfectly happy to acknowledge, but which became an awkward subject given Anglo-German hostilities in the 20th century. Wellington entered politics again when he was appointed Master-General of the Ordnance in the Tory government of Lord Liverpool on 26 December 1818. He also became Governor of Plymouth on 9 October 1819. He was appointed Commander-in-Chief of the British Army on 22 January 1827 and Constable of the Tower of London on 5 February 1827. Along with Robert Peel, Wellington became an increasingly influential member of the Tory party, and in 1828 he resigned as Commander-in-Chief and became Prime Minister. During his first seven months as prime minister, he chose not to live in the official residence at 10 Downing Street, finding it too small. He moved in only because his own home, Apsley House, required extensive renovations. During this time he was largely instrumental in the foundation of King's College London. On 20 January 1829 Wellington was appointed Lord Warden of the Cinque Ports. His term was marked by Catholic emancipation: the granting of almost full civil rights to Catholics in Great Britain and Ireland.
The change was prompted by the landslide by-election win of Daniel O'Connell, an Irish Catholic proponent of emancipation, who was elected despite not being legally allowed to sit in Parliament. In the House of Lords, facing stiff opposition, Wellington spoke for Catholic Emancipation, and according to some sources, gave one of the best speeches of his career. He was born in Ireland and so had some understanding of the grievances of the Catholic communities there; as Chief Secretary, he had given an undertaking that the remaining Penal Laws would only be enforced as "mildly" as possible. In 1811 Catholic soldiers were given freedom of worship and 18 years later the Catholic Relief Act 1829 was passed with a majority of 105. Many Tories voted against the Act, and it passed only with the help of the Whigs. Wellington had threatened to resign as Prime Minister if the King (George IV) did not give his Royal Assent. The Earl of Winchilsea accused the Duke of "an insidious design for the infringement of our liberties and the introduction of Popery into every department of the State". Wellington responded by immediately challenging Winchilsea to a duel. On 21 March 1829, Wellington and Winchilsea met on Battersea Fields. When the time came to fire, the Duke took aim and Winchilsea kept his arm down. The Duke fired wide to the right. Accounts differ as to whether he missed on purpose, an act known in duelling as a "delope". Wellington claimed he did. However, he was noted for his poor aim and reports more sympathetic to Winchilsea claimed he had aimed to kill. Winchilsea discharged his pistol into the air, a plan he and his second had almost certainly decided upon before the duel. Honour was saved and Winchilsea wrote Wellington an apology. The nickname "Iron Duke" originates from this period, when he experienced a high degree of personal and political unpopularity.
Its repeated use in "Freeman's Journal" throughout June 1830 appears to bear reference to his resolute political will, with taints of disapproval from its Irish editors. During this time, Wellington was greeted by a hostile reaction from the crowds at the opening of the Liverpool and Manchester Railway. Wellington's government fell in 1830. In the summer and autumn of that year, a wave of riots swept the country. The Whigs had been out of power for most of the period since the 1770s, and saw political reform in response to the unrest as the key to their return. Wellington stuck to the Tory policy of no reform and no expansion of suffrage, and as a result, lost a vote of no confidence on 15 November 1830. The Whigs introduced the first Reform Bill while Wellington and the Tories worked to prevent its passage. The Whigs could not get the bill past its second reading in the British House of Commons, and the bill failed. An election followed in direct response and the Whigs were returned with a landslide majority. A second Reform Bill was introduced and passed in the House of Commons but was defeated in the Tory-controlled House of Lords. Another wave of near insurrection swept the country. Wellington's residence at Apsley House was targeted by a mob of demonstrators on 27 April 1831 and again on 12 October, leaving his windows smashed. Iron shutters were installed in June 1832 to prevent further damage by crowds angry over rejection of the Reform Bill, which he strongly opposed. The Whig Government fell in 1832 and Wellington was unable to form a Tory Government partly because of a run on the Bank of England. This left King William IV no choice but to restore Earl Grey to the premiership. Eventually, the bill passed the House of Lords after the King threatened to fill that House with newly created Whig peers if it were not.
Wellington was never reconciled to the change; when Parliament first met after the first election under the widened franchise, Wellington is reported to have said "I never saw so many shocking bad hats in my life". Wellington opposed the Jewish Civil Disabilities Repeal Bill, and he stated in Parliament on 1 August 1833 that England "is a Christian country and a Christian legislature, and that the effect of this measure would be to remove that peculiar character." The Bill was defeated 104 votes to 54. Wellington was gradually superseded as leader of the Tories by Robert Peel, while the party evolved into the Conservatives. When the Tories were returned to power in 1834, Wellington declined to become Prime Minister because he thought membership in the House of Commons had become essential. The king reluctantly approved Peel, who was in Italy. So for three weeks in November and December 1834, Wellington acted as interim leader, taking the responsibilities of Prime Minister and most of the other ministries. In Peel's first cabinet (1834–1835), Wellington became Foreign Secretary, while in the second (1841–1846) he was a Minister without Portfolio and Leader of the House of Lords. Wellington was also re-appointed Commander-in-Chief of the British Army on 15 August 1842 following the resignation of Lord Hill. Wellington served as the leader of the Conservative party in the House of Lords, 1828–1846. Some historians have belittled him as a befuddled reactionary, but a consensus in the late 20th century depicts him as a shrewd operator who hid his cleverness behind the facade of a poorly informed old soldier. Wellington worked to transform the Lords from unstinting support of the Crown to an active player in political manoeuvring, with a commitment to the landed aristocracy.
He used his London residence as a venue for intimate dinners and private consultations, together with extensive correspondence that kept him in close touch with party leaders in the Commons, and the main figures in the Lords. He gave public rhetorical support to Ultra-Tory anti-reform positions, but then deftly changed positions toward the party's centre, especially when Peel needed support from the upper house. Wellington's success was based on the 44 elected peers from Scotland and Ireland, whose election he controlled. Wellesley married Kitty Pakenham in Dublin on 10 April 1806. The marriage proved unsatisfactory and the two spent years apart while he was campaigning. Kitty grew depressed, and Wellesley pursued other sexual and romantic partners. They had two sons, Arthur in 1807 and Charles in 1808. The couple lived apart most of the time and occupied separate rooms when they were together. Her brother Edward Pakenham served under Wellesley throughout the Peninsular War, and Wellesley's regard for him helped to smooth his relations with Kitty, until Pakenham's death at the Battle of New Orleans in 1815. Wellington retired from political life in 1846, although he remained Commander-in-Chief, and returned briefly to the spotlight in 1848 when he helped organise a military force to protect London during that year of European revolution. The Conservative Party had split over the Repeal of the Corn Laws in 1846, with Wellington and most of the former Cabinet still supporting Peel, but most of the MPs led by Lord Derby supporting a protectionist stance. Early in 1852 Wellington, by then very deaf, gave Derby's first government its nickname by shouting "Who? Who?" as the list of inexperienced Cabinet Ministers was read out in the House of Lords. He became Chief Ranger and Keeper of Hyde Park and St. James's Park on 31 August 1850. He was also colonel of the 33rd Regiment of Foot from 1 February 1806 and colonel of the Grenadier Guards from 22 January 1827.
Kitty died of cancer in 1831; despite their generally unhappy relations, which had led to an effective separation, Wellington was said to have been greatly saddened by her death, his one comfort being that after "half a lifetime together, they had come to understand each other at the end". He had found consolation for his unhappy marriage in his warm friendship with the diarist Harriet Arbuthnot, wife of his colleague Charles Arbuthnot. Harriet's death in the cholera epidemic of 1834 was almost as great a blow to Wellington as it was to her husband. The two widowers spent their last years together at Apsley House. Wellington died at Walmer Castle in Deal on 14 September 1852. This was his residence as Lord Warden of the Cinque Ports. Walmer Castle was said to have been his favourite residence. He was found to be unwell on that morning and was aided from his military campaign bed (the same one he used throughout his historic military career) and seated in his chair where he died. His death was recorded as being due to the after-effects of a stroke culminating in a series of seizures. He was aged 83. Although in life he hated travelling by rail (after witnessing the death of William Huskisson, one of the first railway accident casualties), his body was then taken by train to London, where he was given a state funeral – one of only a handful of British subjects to be honoured in that way (other examples are Lord Nelson and Sir Winston Churchill). The funeral took place on 18 November 1852. Before the funeral, the Duke's body lay in state at the Royal Hospital, Chelsea. Members of the royal family, including Queen Victoria, the Prince Consort, the Prince of Wales, and the Princess Royal, visited to pay their respects. When viewing opened to the public, crowds thronged to visit and several people were killed in the crush. 
At his funeral there was hardly any space to stand because of the number of people attending, and the effusive praise given him in Tennyson's "Ode on the Death of the Duke of Wellington" attests to his stature at the time of his death. He was buried in a sarcophagus of luxulyanite in St Paul's Cathedral next to Lord Nelson. A bronze memorial was sculpted by Alfred Stevens, and features two intricate supports: "Truth tearing the tongue out of the mouth of False-hood", and "Valour trampling Cowardice underfoot". Stevens did not live to see it placed in its home under one of the great arches of the Cathedral. Wellington's casket was decorated with banners which were made for his funeral procession. Originally, there was one from Prussia, which was removed during World War I and never reinstated. In the procession, the "Great Banner" was carried by General Sir James Charles Chatterton of the 4th Dragoon Guards on the orders of Queen Victoria. Most of the book "A Biographical Sketch of the Military and Political Career of the Late Duke of Wellington" by Weymouth newspaper proprietor Joseph Drew is a detailed contemporary account of his death, lying in state and funeral. After his death, Irish and English newspapers disputed whether Wellington had been born an Irishman or an Englishman. In 2002, he was number 15 in the BBC's poll of the 100 Greatest Britons. Owing to its links with Wellington, as the former commanding officer and colonel of the regiment, the title "33rd (The Duke of Wellington's) Regiment" was granted to the 33rd Regiment of Foot, on 18 June 1853 (the 38th anniversary of the Battle of Waterloo) by Queen Victoria. Wellington's battle record is exemplary; he participated in some 60 battles during the course of his military career. Wellington always rose early; he "couldn't bear to lie awake in bed", even if the army was not on the march. 
Even when he returned to civilian life after 1815, he slept in a camp bed, reflecting his lack of regard for creature comforts; it remains on display in Walmer Castle. General Miguel de Álava complained that Wellington said so often that the army would march "at daybreak" and dine on "cold meat", that he began to dread those two phrases. While on campaign, he seldom ate anything between breakfast and dinner. During the retreat to Portugal in 1811, he subsisted on "cold meat and bread", to the despair of his staff who dined with him. He was, however, renowned for the quality of the wine that he drank and served, often drinking a bottle with his dinner (not a great quantity by the standards of his day). He rarely showed emotion in public, and often appeared condescending to those less competent or less well-born than himself (which was nearly everyone). However, Álava was a witness to an incident just before the Battle of Salamanca. Wellington was eating a chicken leg while observing the manoeuvres of the French army through a spyglass. He spotted an overextension in the French left flank, and realised that he could launch a successful attack there. He threw the drumstick in the air and shouted "Les français sont perdus!" ("The French are lost!"). After the Battle of Toulouse, an aide brought him the news of Napoleon's abdication, and Wellington broke into an impromptu flamenco dance, spinning around on his heels and clicking his fingers. Military historian Charles Dalton recorded that, after a hard-fought battle in Spain, a young officer made the comment, "I am going to dine with Wellington tonight", which was overheard by the Duke as he rode by. "Give me at least the prefix of Mr. before my name," Wellington said. "My Lord," replied the officer, "we do not speak of Mr. Caesar or Mr. Alexander, so why should I speak of Mr. Wellington?"
His stern countenance and iron-handed discipline were renowned; he was said to disapprove of soldiers cheering as "too nearly an expression of opinion." Nevertheless, Wellington cared for his men; he refused to pursue the French after the battles of Porto and Salamanca, foreseeing an inevitable cost to his army in chasing a diminished enemy through rough terrain. The only time that he ever showed grief in public was after the storming of Badajoz; he cried at the sight of the British dead in the breaches. In this context, his famous dispatch after the Battle of Vitoria, calling them the "scum of the earth," can be seen to be fuelled as much by disappointment at their breaking ranks as by anger. He expressed his grief openly the night after Waterloo before his personal physician, and later with his family; unwilling to be congratulated for his victory, he broke down in tears, his fighting spirit diminished by the high cost of the battle and great personal loss. Wellington's soldier servant, a gruff German called Beckerman, and his long-serving valet, James Kendall, who served him for 25 years and was with him when he died, were both devoted to him. (A story that he never spoke to his servants and preferred instead to write his orders on a note pad on his dressing table in fact probably refers to his son, the 2nd Duke. It was recorded by the 3rd Duke's niece, Viva Seton Montgomerie (1879–1959), as being an anecdote she heard from an old retainer, Charles Holman, who was said greatly to resemble Napoleon. Holman is recorded as a servant of the Dukes of Wellington from 1871 to 1905.) Following an incident when, as Master-General of the Ordnance, he had been close to a large explosion, Wellington began to experience deafness and other ear-related problems. In 1822, he had an operation to improve the hearing of the left ear. The result, however, was that he became permanently deaf on that side. It is claimed that he was "never quite well afterwards".
Wellington had a "vigorous sexual appetite" and many amorous liaisons during his marriage to Kitty. He enjoyed the company of intellectual and attractive women for many decades, particularly after the Battle of Waterloo and his subsequent ambassadorial position in Paris. The British press lampooned this side of the national hero. In 1824, one liaison came back to haunt him, when Wellington received a letter from a publisher offering to refrain from issuing an edition of the rather racy memoirs of one of his mistresses, Harriette Wilson, in exchange for financial consideration. It is said that the Duke promptly returned the letter, after scrawling across it, "Publish and be damned". However, Hibbert notes in his biography that the letter can be found among the Duke's papers, with nothing written on it. It is certain that Wellington did reply, and the tone of a further letter from the publisher, quoted by Longford, suggests that he had refused in the strongest language to submit to blackmail. He was also a remarkably practical man who spoke concisely. In 1851, it was discovered that there were a great many sparrows flying about in the Crystal Palace just before the Great Exhibition was to open. His advice to Queen Victoria was "Sparrowhawks, ma'am". Wellington has often been portrayed as a defensive general, even though many, perhaps most, of his battles were offensive (Argaum, Assaye, Oporto, Salamanca, Vitoria, Toulouse). However, for most of the Peninsular War, where he earned his fame, his army lacked the numbers for a strategically offensive posture. The commonly used nickname "Iron Duke" originally related to his consistent political resolve rather than to any particular incident. In various cases its editorial use appears to be disparaging. It is likely that its use became more widespread after an incident in 1832 in which he installed metal shutters to prevent rioters breaking windows at Apsley House.
The term may have been made increasingly popular by "Punch" cartoons published in 1844–45. Wellington also had various other nicknames.
https://en.wikipedia.org/wiki?curid=8474
Disk operating system A disk operating system (abbreviated DOS) is a computer operating system that resides on and can use a disk storage device, such as a floppy disk, hard disk drive, or optical disc. A disk operating system must provide a file system for organizing, reading, and writing files on the storage disk. Strictly speaking, this definition does not apply to current generations of operating systems, such as versions of Microsoft Windows currently in use, and is more appropriately used only for older generations of operating systems. Disk operating systems were available for mainframes, minicomputers, microprocessors and home computers and were usually loaded from the disks themselves as part of the boot process. In the early days of computers, there were no disk drives, floppy disks or modern flash storage devices. Early storage devices such as delay lines, core memories, punched cards, punched tape, magnetic tape, and magnetic drums were used instead. And in the early days of microcomputers and home computers, paper tape, audio cassette tape (see Kansas City standard), or nothing at all was used instead. In the latter case, program and data entry was done at front panel switches directly into memory or through a computer terminal / keyboard, sometimes controlled by a BASIC interpreter in ROM; when power was turned off any information was lost. In the early 1960s, as disk drives became larger and more affordable, various mainframe and minicomputer vendors began introducing disk operating systems and modifying existing operating systems to exploit disks. Both hard disks and floppy disk drives require software to manage rapid access to block storage of sequential and other data. For most microcomputers, a disk drive of any kind was an optional peripheral; systems could be used with a tape drive or booted without a storage device at all. The disk operating system component of the operating system was only needed when a disk drive was used.
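The division of labour described above, raw block access below and a file system above, can be sketched in a few lines. The following is only a toy illustration under invented assumptions: the class names, the 16-byte block size, and the simple FAT-like allocation scheme are made up for the example and resemble no particular historical DOS.

```python
BLOCK_SIZE = 16  # bytes per block; real floppies used sizes such as 128 or 512

class ToyDisk:
    """A raw block device: it can only read or write whole numbered blocks."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]

    def read_block(self, n):
        return self.blocks[n]

    def write_block(self, n, data):
        # Pad short writes with zero bytes so every block stays fixed-size.
        self.blocks[n] = data.ljust(BLOCK_SIZE, b"\x00")

class ToyFileSystem:
    """Maps file names to chains of disk blocks, loosely like a FAT."""
    def __init__(self, disk):
        self.disk = disk
        self.free = list(range(len(disk.blocks)))  # free-block list
        self.directory = {}  # name -> (block chain, file length in bytes)

    def write_file(self, name, data):
        needed = -(-len(data) // BLOCK_SIZE)  # ceiling division
        chain = [self.free.pop(0) for _ in range(needed)]
        for i, blk in enumerate(chain):
            self.disk.write_block(blk, data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE])
        self.directory[name] = (chain, len(data))

    def read_file(self, name):
        chain, length = self.directory[name]
        raw = b"".join(self.disk.read_block(b) for b in chain)
        return raw[:length]  # strip the padding in the final block

fs = ToyFileSystem(ToyDisk(num_blocks=8))
fs.write_file("HELLO.TXT", b"Hello from a toy disk operating system!")
print(fs.read_file("HELLO.TXT").decode())
```

The point of the sketch is the layering: the "disk" knows nothing about files, and the "file system" never touches bytes except through whole-block reads and writes, which is exactly the boundary a disk operating system manages.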
By the time IBM announced the System/360 mainframes, the concept of a disk operating system was well established. Although IBM did offer Basic Programming Support (BPS/360) and TOS/360 for small systems, they were out of the mainstream and most customers used either DOS/360 or OS/360. Most home and personal computers of the late 1970s and 1980s used a disk operating system, most often with "DOS" in the name and simply referred to as "DOS" within their respective communities: CBM DOS for Commodore 8-bit systems, Atari DOS for the Atari 8-bit family, TRS-DOS for the TRS-80, Apple DOS for the Apple II, and MS-DOS for IBM PC compatibles. Usually, a disk operating system was loaded from a disk. One exception was Commodore, whose DOS resided on ROM chips in the disk drives. (The Lt. Kernal hard disk subsystem for the Commodore 64 and Commodore 128 models stored its DOS on the disk, as is the case with modern systems, and loaded the DOS into RAM at boot time.) Another exception was the British BBC Micro's optional Disc Filing System, DFS, offered as a kit with a disk controller chip, a ROM chip, and a handful of logic chips, to be installed inside the computer. Some disk operating systems were the operating systems for the entire computer system.
https://en.wikipedia.org/wiki?curid=8476
Doublespeak Doublespeak is language that deliberately obscures, disguises, distorts, or reverses the meaning of words. Doublespeak may take the form of euphemisms (e.g. "downsizing" for layoffs and "servicing the target" for bombing), in which case it is primarily meant to make the truth sound more palatable. It may also refer to intentional ambiguity in language or to actual inversions of meaning. In such cases, doublespeak disguises the nature of the truth. Doublespeak is most closely associated with political language. The word is comparable to George Orwell's Newspeak and Doublethink as used in his book "Nineteen Eighty-Four", though the term Doublespeak does not appear there. The term "doublespeak" originates in George Orwell's book "1984" (Nineteen Eighty-Four). Although the term is not used in the book, it is a close relative of two of the book's central concepts, "doublethink" and "Newspeak". Another variant, "doubletalk", also referring to deliberately ambiguous speech, did exist at the time Orwell wrote his book, but the usage of "doublespeak", as well as of "doubletalk", in the sense emphasizing ambiguity clearly postdates the publication of "Nineteen Eighty-Four". Parallels have also been drawn between doublespeak and Orwell's classic essay "Politics and the English Language", which discusses the distortion of language for political purposes. In it he observes that political language serves to distort and obfuscate reality. Orwell's description of political speech is extremely similar to the contemporary definition of doublespeak. The writer Edward S. Herman cited what he saw as examples of doublespeak and doublethink in modern society. Herman describes the principal characteristics of doublespeak in his book "Beyond Hypocrisy". Terrence P. Moran of the US National Council of Teachers of English has compared the use of doublespeak in the mass media to a set of laboratory experiments conducted on rats.
In the experiment, a sample of rats was first deprived of food, before one group was fed sugar and water and the other group a saccharin solution. Both groups exhibited behavior indicating that their hunger was satisfied, but rats in the second group (which were fed saccharin solution) died of malnutrition. Moran parallels doublespeak's effects on the social masses to the second group of rats upon whom an illusionary effect was created. He also highlights the structural nature of doublespeak, and notes that the mass media and other social institutions employ an active, downward-aimed approach in managing the opinions of society at large. Doublespeak also has close connections with some contemporary theories. Edward S. Herman and Noam Chomsky comment in their book that Orwellian doublespeak is an important component of the manipulation of the English language in American media, through a process called "dichotomization," a component of media propaganda involving "deeply embedded double standards in the reporting of news." For example, the use of state funds by the poor and financially needy is commonly referred to as "social welfare" or "handouts," which the "coddled" poor "take advantage of." These terms, however, are not as often applied to other beneficiaries of government spending, such as the military. The National Council of Teachers of English (NCTE) Committee on Public Doublespeak was formed in 1971, in the midst of the Watergate scandal. It was at a point when there was widespread skepticism about the degree of truth which characterized relationships between the public and the worlds of politics, the military, and business. NCTE passed two resolutions. One called for the Council to find means to study dishonest and inhumane uses of language and literature by advertisers, to bring offenses to public attention, and to propose classroom techniques for preparing children to cope with commercial propaganda.
The other called for the Council to find means to study the relationships between language and public policy and to track, publicize, and combat semantic distortion by public officials, candidates for office, political commentators, and all others whose language is transmitted through the mass media. The two resolutions were accomplished by forming NCTE's Committee on Public Doublespeak, a body which has made significant contributions in describing the need for reform where clarity in communication has been deliberately distorted. Hugh Rank helped form the Doublespeak committee in 1971 and was its first chairman. Under his editorship, the committee produced a book called "Language and Public Policy" (1974), with the aim of informing readers of the extensive scope of doublespeak being used to deliberately mislead and deceive the audience. He highlighted the deliberate public misuses of language and provided strategies for countering doublespeak by focusing on educating people in the English language so as to help them identify when doublespeak is being put into play. He was also the founder of the Intensify/Downplay pattern that has been widely used to identify instances of doublespeak being used. Daniel Dieterich served as the second chairman of the Doublespeak committee after Hugh Rank in 1975. He served as editor of its second publication, "Teaching about Doublespeak" (1976), which carried forward the Committee's charge to inform teachers of ways of teaching students how to recognize and combat language designed to mislead and misinform. William D. Lutz has served as the third chairman of the Doublespeak Committee since 1975. In 1989, both his own book "Doublespeak" and, under his editorship, the committee's third book, "Beyond Nineteen Eighty-Four", were published. 
"Beyond Nineteen Eighty-Four" consists of 220 pages and eighteen articles contributed by long-time Committee members and others whose bodies of work have contributed to public understanding about language, as well as a bibliography of 103 sources on doublespeak. Lutz was also the former editor of the now defunct "Quarterly Review of Doublespeak", which examined the use of vocabulary by public officials to obscure the underlying meaning of what they tell the public. Lutz is one of the main contributors to the committee as well as promoting the term "doublespeak" to a mass audience to inform them of its deceptive qualities. He mentions: A. M. Tibbetts is one of the main critics of the NCTE, claiming that "the Committee's very approach to the misuse of language and what it calls 'doublespeak' may in the long run limit its usefulness". According to him, the "Committee's use of Orwell is both confused and confusing". The NCTE's publications resonate with George Orwell's name, and allusions to him abound in statements on doublespeak; for example, the committee quoted Orwell's remark that "language is often used as an instrument of social control" in "Language and Public Policy". Tibbetts argues that such a relation between NCTE and Orwell's work is contradictory because "the Committee's attitude towards language is liberal, even radical" while "Orwell's attitude was conservative, even reactionary". He also criticizes the Committee's "continual attack" against linguistic "purism". Whereas in the early days of the practice it was considered wrong to construct words to disguise meaning, this is now an established practice. There is a thriving industry in constructing words without explicit meaning but with particular connotations for new products or companies. Doublespeak is also employed in the field of politics. Advertisers can use doublespeak to mask their commercial intent from users, as users' defenses against advertising become more entrenched. 
Some are attempting to counter this technique with a number of systems offering diverse views and information to highlight the manipulative and dishonest methods that advertisers employ. According to Jacques Ellul, "the aim is not to even modify people’s ideas on a given subject, rather, it is to achieve conformity in the way that people act." He demonstrates this view by offering an example from drug advertising. Use of doublespeak in advertisements resulted in aspirin production rates rising by almost 50 percent from over 23 million pounds in 1960 to over 35 million pounds in 1970. Charles Weingartner, one of the founding members of the NCTE committee on Public Doublespeak, mentioned: "people do not know enough about the subject (the reality) to recognize that the language being used conceals, distorts, misleads. Teachers of English should teach our students that words are not things, but verbal tokens or signs of things that should finally be carried back to the things that they stand for to be verified." According to William Lutz: "Only by teaching respect and love for the language can teachers of English instill in students the sense of outrage they should experience when they encounter doublespeak." "Only by using language well will we come to appreciate the perversion inherent in doublespeak." The Intensify/Downplay pattern was formulated by Hugh Rank and is a simple tool designed to teach some basic patterns of persuasion used in political propaganda and commercial advertising. The function of the intensify/downplay pattern is not to dictate what should be discussed but to encourage coherent thought and systematic organization. The pattern works in two ways: intensifying and downplaying. All people intensify, and this is done via repetition, association and composition. Downplaying is commonly done via omission, diversion and confusion as they communicate in words, gestures, numbers, et cetera.
Individuals can better cope with organized persuasion by recognizing the common ways whereby communication is intensified or downplayed, so as to counter doublespeak. Doublespeak is often used by politicians for the advancement of their agenda. The Doublespeak Award is an "ironic tribute to public speakers who have perpetuated language that is grossly deceptive, evasive, euphemistic, confusing, or self-centered." It has been issued by the National Council of Teachers of English (NCTE) since 1974. The recipients of the Doublespeak Award are usually politicians, national administrations, or government departments. An example of this is the United States Department of Defense, which won the award three times, in 1991, 1993, and 2001. For the 1991 award, the United States Department of Defense "swept the first six places in the Doublespeak top ten" for using euphemisms like "servicing the target" (bombing) and "force packages" (warplanes). Among the other phrases in contention were "difficult exercise in labor relations", meaning a strike, and "meaningful downturn in aggregate output", an attempt to avoid saying the word "recession". Doublespeak, particularly when exaggerated, can be used as a device in satirical comedy and social commentary to ironically parody political or bureaucratic establishments intent on obfuscation or prevarication. The television series "Yes Minister" is notable for its use of this device. Oscar Wilde was an early proponent of this device and a significant influence on Orwell.
https://en.wikipedia.org/wiki?curid=8478
Dressed to Kill (1980 film) Dressed to Kill is a 1980 American neo-noir slasher film written and directed by Brian De Palma. Starring Michael Caine, Angie Dickinson, Nancy Allen, and Keith Gordon, the film depicts the events leading up to the murder of a New York City housewife (Dickinson) before following a prostitute (Allen) who witnesses the crime. It contains several direct references to Alfred Hitchcock's 1960 film "Psycho", such as a man dressing as a woman to commit murders, significant shower scenes, and the murder of the female lead early in the picture. Released in the summer of 1980, "Dressed to Kill" was a box office hit in the United States, grossing over $30 million. It received largely favorable reviews, and critic David Denby of "New York Magazine" proclaimed it "the first great American movie of the '80s." Allen received both a Golden Globe Award nomination for New Star of the Year, as well as a Golden Raspberry Award for Worst Actress, while Dickinson received a Saturn Award for Best Actress for her performance. Kate Miller is a sexually frustrated housewife who is in therapy with New York City psychiatrist Dr. Robert Elliott. During an appointment, Kate attempts to seduce him, but Elliott rejects her advances. Kate goes to the Metropolitan Museum of Art where she has an unexpected flirtation with a mysterious stranger. Kate and the stranger stalk each other through the museum until they finally wind up outside, where Kate joins him in a taxi. They begin to have sex and continue at his apartment. Hours later, Kate awakens and decides to discreetly leave while the man, Warren Lockman, is asleep. Kate sits at his desk to leave him a note and finds a document indicating that Warren has contracted a sexually transmitted disease. Mortified, she leaves the apartment. In her haste, she forgets her wedding ring on the nightstand, and she returns to retrieve it. 
The elevator doors open on the figure of a tall, blond woman in dark sunglasses wielding a straight razor. Kate is violently stabbed to death in the elevator. A high-priced call girl, Liz Blake, happens upon the body. She catches a glimpse of the killer in the elevator's convex mirror, and subsequently becomes both the prime suspect and the killer's next target. Dr. Elliott receives a bizarre message on his answering machine from "Bobbi", a transgender patient. Bobbi taunts the psychiatrist for breaking off their therapy sessions, apparently because Elliott refuses to sign the necessary papers for Bobbi to get sex reassignment surgery. Elliott tries to convince Dr. Levy, the patient's new doctor, that Bobbi is a danger to herself and others. Police Detective Marino is skeptical about Liz's story, partly because of her profession, so Liz joins forces with Kate's revenge-minded son Peter to find the killer. Peter, an inventor, uses a series of homemade listening devices and time-lapse cameras to track patients leaving Elliott's office. They catch Bobbi on camera, and soon Liz is being stalked by a tall blonde in sunglasses. Several attempts are subsequently made on Liz's life. One, in the New York City Subway, is thwarted by Peter, who sprays Bobbi with homemade Mace. Liz and Peter scheme to learn Bobbi's birth name by getting inside Dr. Elliott's office. Liz baits the therapist by stripping to lingerie and coming on to him, distracting him long enough to make a brief exit and leaf through his appointment book. Peter is watching through the window when a blonde pulls him away. When Liz returns, a blonde with a razor confronts her; the blonde from outside shoots and wounds the blonde inside, whose wig falls off to reveal Dr. Elliott: he is Bobbi. The shooter turns out to be a female police officer, the same blonde who has been trailing Liz. Elliott is arrested and placed in an insane asylum. Dr.
Levy explains later to Liz that Elliott wanted to be a woman, but his male side would not allow him to go through with the operation. Whenever a woman sexually aroused Elliott, Bobbi, representing the unstable, female side of the doctor's personality, became threatened to the point that it finally became murderous. When Dr. Levy realized this during his last conversation with Elliott, he called the police on the spot, and with his help they apprehended Elliott. In a final sequence, Elliott escapes from the asylum and slashes Liz's throat in a bloody act of vengeance; she wakes up screaming as Peter rushes to her side, and the attack is revealed to have been just a nightmare. The nude body in the opening scene, taking place in a shower, was not that of Angie Dickinson, but of 1977 "Penthouse" Pet of the Year model Victoria Lynn Johnson. De Palma originally wanted Norwegian actress Liv Ullmann to play Kate Miller, but she declined because of the violence. The role then went to Angie Dickinson. Sean Connery was offered the role of Robert Elliott and was enthusiastic about it, but declined on account of previous commitments. Connery later worked with De Palma on the 1987 Oscar-winning adaptation of "The Untouchables". De Palma called the elevator killing the best murder scene he had ever done. "Dressed to Kill" premiered in Los Angeles and New York City on July 25, 1980. The film grossed $3,416,000 in its opening weekend from 591 theatres and improved its gross the following weekend with $3,640,000 from 596 theatres. It grossed a total of $31.9 million at the U.S. box office, and was the 21st highest-grossing film of the year. "Dressed to Kill" currently holds an 81% "fresh" rating on Rotten Tomatoes based on 48 reviews, with an average rating of 6.63/10. The consensus states, "With arresting visuals and an engrossingly lurid mystery, "Dressed to Kill" stylishly encapsulates writer-director Brian De Palma's signature strengths."
Roger Ebert awarded the film three stars out of four, stating "the museum sequence is brilliant" and adding: ""Dressed to Kill" is an exercise in style, not narrative; it would rather look and feel like a thriller than make sense, but DePalma has so much fun with the conventions of the thriller that we forgive him and go along." Gene Siskel also gave it three stars out of four, writing that there were scenes "that are as exciting and as stylish as any ever put on film. Unfortunately, a good chunk of the film is a whodunit, and its mystery is so easy to solve that we merely end up watching the film's visual pyrotechnics at a distance, never getting all that involved." Vincent Canby of "The New York Times" called the film "witty, romantic," and "very funny, which helps to defuse the effect of the graphically photographed violence. In addition, the film is, in its own inside-out way, peculiarly moral." His review added that "The performers are excellent, especially Miss Dickinson." "Variety" declared, "Despite some major structural weaknesses, the cannily manipulated combination of mystery, gore and kinky sex adds up to a slick commercial package that stands to draw some rich blood money." David Denby of "New York Magazine" proclaimed the film "the first great American movie of the '80s." Sheila Benson of the "Los Angeles Times" wrote, "The brilliance of "Dressed to Kill" is apparent within seconds of its opening gliding shot; it is a sustained work of terror—elegant, sensual, erotic, bloody, a directorial tour de force." Pauline Kael of "The New Yorker" stated of De Palma that "his timing is so great that when he wants you to feel something he gets you every time. His thriller technique, constantly refined, has become insidious, jewelled. It's hardly possible to find a point at which you could tear yourself away from this picture."
Gary Arnold of "The Washington Post" wrote, "This elegant new murder thriller promises to revive the lagging summer box office and enhance De Palma's reputation as the most exciting and distinctive manipulator of suspense since Alfred Hitchcock." In his movie guide, Leonard Maltin gave the film 3 1/2 stars out of four, calling it a "High-tension melodrama", and stating "De Palma works on viewers' emotions, not logic, and maintains a fever pitch from start to finish." He also praised Pino Donaggio's "chilling music score." John Simon of the "National Review", after taking note of the two-page advertisements full of superlatives in "The New York Times", wrote "What "Dressed to Kill" dispenses liberally, however, is sophomoric soft-core pornography, vulgar manipulation of the emotions for mere sensation, salacious but inept dialogue that is a cross between comic-strip Freudianism and sniggering double entendres, and a plot line so full of holes to be at best a dotted line". Two versions of the film exist in North America, an R-rated version and an unrated version. The unrated version is around 30 seconds longer and shows more pubic hair in the shower scene, more blood in the elevator scene (including a close-up shot of the killer slitting Kate's throat), and more explicit dialogue from Liz during the scene in Elliott's office. These scenes were trimmed when the MPAA originally gave the film an X rating. The film is currently owned by Metro-Goldwyn-Mayer (successor to Orion Pictures, which bought Filmways and American International Pictures in 1982). The film saw a 1984 VHS release by Warner Home Video, and later another VHS release by Goodtimes under license from Orion. In 2002, MGM released the film on DVD, including special features. In 2010, MGM released both R-rated and unrated versions on DVD and Blu-ray. The Criterion Collection released a deluxe Blu-ray edition of the film on September 8, 2015.
https://en.wikipedia.org/wiki?curid=8481
Diesel cycle The Diesel cycle is a combustion process of a reciprocating internal combustion engine. In it, fuel is ignited by heat generated during the compression of air in the combustion chamber, into which fuel is then injected. This is in contrast to igniting the fuel-air mixture with a spark plug as in the Otto cycle (four-stroke/petrol) engine. Diesel engines are used in aircraft, automobiles, power generation, diesel-electric locomotives, and both surface ships and submarines. The Diesel cycle is assumed to have constant pressure during the initial part of the combustion phase (V2 to V3 in the diagram, below). This is an idealized mathematical model: real physical diesels do have an increase in pressure during this period, but it is less pronounced than in the Otto cycle. In contrast, the idealized Otto cycle of a gasoline engine approximates a constant-volume process during that phase. The image shows a p-V diagram for the ideal Diesel cycle, where p is pressure and V the volume, or v the specific volume if the process is placed on a unit-mass basis. The "idealized" Diesel cycle assumes an ideal gas, ignores combustion chemistry and the exhaust and recharge procedures, and simply follows four distinct processes: isentropic compression (1 to 2), constant-pressure heat addition (2 to 3), isentropic expansion (3 to 4), and constant-volume heat rejection (4 to 1). The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic process (blue), energy is transferred into the system in the form of work W_in, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant-pressure (red, isobaric) process, energy enters the system as heat Q_in. During the top isentropic process (yellow), energy is transferred out of the system in the form of work W_out, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat.
During the constant-volume (green, isochoric) process, some of the energy flows out of the system as heat through the right depressurizing process, Q_out. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, the net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system. The net work produced is also represented by the area enclosed by the cycle on the p-V diagram. The net work is produced per cycle and is also called the useful work, as it can be turned to other useful types of energy and propel a vehicle (kinetic energy) or produce electrical energy. The summation of many such cycles per unit of time is called the developed power. W_out is also called the gross work, some of which is used in the next cycle of the engine to compress the next charge of air. The maximum thermal efficiency of a Diesel cycle is dependent on the compression ratio and the cut-off ratio. Under cold air standard analysis it is given by: η = 1 − (1/r^(γ−1)) · (α^γ − 1)/(γ(α − 1)), where r = V1/V2 is the compression ratio, α = V3/V2 is the cut-off ratio, and γ = cp/cv is the heat capacity ratio of air. Because the heat addition takes place at constant pressure, the cut-off ratio can be expressed in terms of temperature: α = V3/V2 = T3/T2. T3 can be approximated to the flame temperature of the fuel used; the flame temperature can be approximated to the adiabatic flame temperature of the fuel with the corresponding air-to-fuel ratio and compression pressure. T1 can be approximated to the inlet air temperature, and T2 then follows from the isentropic compression relation T2 = T1·r^(γ−1). This formula only gives the ideal thermal efficiency. The actual thermal efficiency will be significantly lower due to heat and friction losses. The formula is more complex than the Otto cycle (petrol/gasoline engine) relation, η = 1 − 1/r^(γ−1). The additional complexity for the Diesel formula arises because the heat addition is at constant pressure and the heat rejection is at constant volume.
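The energy bookkeeping described in this section can be checked numerically. The sketch below steps an ideal air-standard Diesel cycle through its four processes using illustrative values (inlet air at 300 K, compression ratio 18, cut-off ratio 2, γ = 1.4; none of these figures come from the article) and confirms that the net work equals the heat added minus the heat rejected, and that the resulting efficiency agrees with the closed-form expression:

```python
# Air-standard Diesel cycle check; all numeric inputs are illustrative.
T1 = 300.0      # inlet air temperature, K
r = 18.0        # compression ratio V1/V2
alpha = 2.0     # cut-off ratio V3/V2
gamma = 1.4     # heat capacity ratio cp/cv for cold air
cp = 1.005      # specific heat at constant pressure, kJ/(kg*K)
cv = cp / gamma

# State temperatures around the cycle.
T2 = T1 * r ** (gamma - 1)            # isentropic compression 1 -> 2
T3 = alpha * T2                       # isobaric heat addition 2 -> 3 (T3/T2 = V3/V2)
T4 = T3 * (alpha / r) ** (gamma - 1)  # isentropic expansion 3 -> 4 back to V1

Q_in = cp * (T3 - T2)   # heat added at constant pressure, kJ/kg
Q_out = cv * (T4 - T1)  # heat rejected at constant volume, kJ/kg
W_net = Q_in - Q_out    # net (useful) work per unit mass

eta_cycle = W_net / Q_in
eta_formula = 1 - (1 / r ** (gamma - 1)) * (alpha ** gamma - 1) / (gamma * (alpha - 1))

assert abs(eta_cycle - eta_formula) < 1e-9  # the two routes agree
print(round(eta_cycle, 4))  # 0.6316 for these values
```

The agreement is exact because T4 = α^γ·T1 for this cycle, which is how the closed-form efficiency is derived in the first place.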
The Otto cycle by comparison has both the heat addition and rejection at constant volume. Comparing the two formulae, it can be seen that for a given compression ratio (r), the "ideal" Otto cycle will be more efficient. However, a "real" diesel engine will be more efficient overall since it will have the ability to operate at higher compression ratios. If a petrol engine were to have the same compression ratio, then knocking (self-ignition) would occur and this would severely reduce the efficiency, whereas in a diesel engine, the self-ignition is the desired behavior. Additionally, both of these cycles are only idealizations, and the actual behavior does not divide as clearly or sharply. Furthermore, the ideal Otto cycle formula stated above does not include throttling losses, which do not apply to diesel engines. Diesel engines have the lowest specific fuel consumption of any large internal combustion engine employing a single cycle, 0.26 lb/hp·h (0.16 kg/kWh) for very large marine engines (combined-cycle power plants are more efficient, but employ two engines rather than one). Two-stroke diesels with high-pressure forced induction, particularly turbocharging, make up a large percentage of the very largest diesel engines. In North America, diesel engines are primarily used in large trucks, where the low-stress, high-efficiency cycle leads to much longer engine life and lower operational costs. These advantages also make the diesel engine ideal for use in the heavy-haul railroad and earthmoving environments. Many model airplanes use very simple "glow" and "diesel" engines. Glow engines use glow plugs. "Diesel" model airplane engines have variable compression ratios. Both types depend on special fuels. Some 19th-century or earlier experimental engines used external flames, exposed by valves, for ignition, but this becomes less attractive with increasing compression.
(It was the research of Nicolas Léonard Sadi Carnot that established the thermodynamic value of compression.) A historical implication of this is that the diesel engine could have been invented without the aid of electricity.
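The Otto-versus-Diesel efficiency comparison above can be made concrete with the two ideal cold-air-standard relations. The numbers below are illustrative assumptions, not figures from the article (γ = 1.4, a knock-limited petrol compression ratio of 10 versus a diesel running at 18 with cut-off ratio 2):

```python
# Ideal cold-air-standard cycle efficiencies; all numeric inputs are illustrative.
GAMMA = 1.4  # heat capacity ratio of air

def otto_efficiency(r: float, gamma: float = GAMMA) -> float:
    """Ideal Otto-cycle thermal efficiency at compression ratio r."""
    return 1 - r ** (1 - gamma)

def diesel_efficiency(r: float, alpha: float, gamma: float = GAMMA) -> float:
    """Ideal Diesel-cycle thermal efficiency at compression ratio r, cut-off ratio alpha."""
    return 1 - (1 / r ** (gamma - 1)) * (alpha ** gamma - 1) / (gamma * (alpha - 1))

# At the same compression ratio, the ideal Otto cycle wins.
assert otto_efficiency(18) > diesel_efficiency(18, alpha=2)

# But a petrol engine is knock-limited (say r = 10), while a diesel can run r = 18,
# so the real-world comparison flips in the diesel's favor.
assert diesel_efficiency(18, alpha=2) > otto_efficiency(10)

print(f"Otto   r=10: {otto_efficiency(10):.3f}")       # ~0.602
print(f"Diesel r=18: {diesel_efficiency(18, 2):.3f}")  # ~0.632
print(f"Otto   r=18: {otto_efficiency(18):.3f}")       # ~0.685
```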
https://en.wikipedia.org/wiki?curid=8483
Deus Ex (video game) Deus Ex is a 2000 action role-playing video game developed by Ion Storm and published by Eidos Interactive. Set in a cyberpunk-themed dystopian world in the year 2052, the game follows JC Denton, an agent of the fictional agency United Nations Anti-Terrorist Coalition (UNATCO), who is given superhuman abilities by nanotechnology, as he sets out to combat hostile forces in a world ravaged by inequality and a deadly plague. His missions entangle him in a conspiracy that brings him into conflict with the Triads, Majestic 12, and the Illuminati. "Deus Ex"s gameplay combines elements of the first-person shooter, stealth, adventure, and role-playing genres, allowing for its tasks and missions to be completed in a variety of ways, which in turn lead to differing outcomes. Presented from the first-person perspective, the player can customize Denton's various abilities such as weapon skills or lockpicking, increasing his effectiveness in these areas; this opens up different avenues of exploration and methods of interacting with or manipulating other characters. The player can complete side missions away from the primary storyline by moving freely around the available areas, which can reward the player with experience points to upgrade abilities and alternative ways to tackle main missions. Powered by the Unreal Engine, the game was released for Microsoft Windows in June 2000, with a Mac OS port following the next month. A modified version of the game was released for the PlayStation 2 in 2002. In the years following its release, "Deus Ex" has received additional improvements and content from its fan community. The game received critical acclaim, including being named "Best PC Game of All Time" in "PC Gamer"s "Top 100 PC Games" in 2011 and a poll carried out by the UK gaming magazine "PC Zone". It received several Game of the Year awards, drawing praise for its pioneering designs in player choice and multiple narrative paths.
As of April 23, 2009, the game had sold more than 1 million copies. The game led to a series, which includes the sequel "Deus Ex: Invisible War" (2003), and three prequels: "Deus Ex: Human Revolution" (2011), "Deus Ex: The Fall" (2013), and "Deus Ex: Mankind Divided" (2016). "Deus Ex" incorporates elements from four video game genres: role-playing, first-person shooter, adventure, and "immersive simulation," the last being a game where "nothing reminds you that you're just playing a game." For example, the game uses a first-person camera during gameplay and includes exploration and character interaction as primary features. The player assumes the role of JC Denton, a nanotech-augmented operative of the United Nations Anti-Terrorist Coalition (UNATCO). This nanotechnology is a central gameplay mechanism and allows players to perform superhuman feats. As the player accomplishes objectives, the player character is rewarded with "skill points." Skill points are used to enhance a character's abilities in eleven different areas, and were designed to provide players with a way to customize their characters; a player might create a combat-focused character by increasing proficiency with pistols or rifles, while a more furtive character can be created by focusing on lock picking and computer hacking abilities. There are four different levels of proficiency in each skill, with the skill point cost increasing for each successive level. Weapons may be customized through "weapon modifications," which can be found or purchased throughout the game. The player might add scopes, silencers, or laser sights; increase the weapon's range, accuracy, or magazine size; or decrease its recoil and reload time; as appropriate to the weapon type. Players are further encouraged to customize their characters through nano-augmentations—cybernetic devices that grant characters superhuman powers.
While the game contains eighteen different nano-augmentations, the player can install a maximum of nine, as each must be used on a certain part of the body: one in the arms, legs, eyes, and head; two underneath the skin; and three in the torso. This forces the player to choose carefully between the benefits offered by each augmentation. For example, the arm augmentation requires the player to decide between boosting their character's skill in hand-to-hand combat or his ability to lift heavy objects. Interaction with non-player characters (NPCs) was a significant design focus. When the player interacts with a non-player character, the game will enter a cutscene-like conversation mode where the player advances the conversation by selecting from a list of dialogue options. The player's choices often have a substantial effect on both gameplay and plot, as non-player characters will react in different ways depending on the selected answer (e.g., rudeness makes them less likely to assist the player). "Deus Ex" features combat similar to first-person shooters, with real-time action, a first-person perspective, and reflex-based gameplay. As the player will often encounter enemies in groups, combat often tends toward a tactical approach, including the use of cover, strafing, and "hit-and-run." A "USA Today" reviewer found, "At the easiest difficulty setting, your character is puréed again and again by an onslaught of human and robotic terrorists until you learn the value of stealth." However, through the game's role-playing systems, it is possible to develop a character's skills and augmentations to create a tank-like combat specialist with the ability to deal and absorb large amounts of damage. Non-player characters will praise or criticize the main character depending on the use of force, incorporating a moral element into the gameplay. 
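The augmentation slot constraint described above (eighteen augmentations competing for nine body slots) can be modeled as a capacity-checked mapping. The slot names and counts below follow the text, but the data structure, function names, and augmentation names are a hypothetical illustration, not the game's actual implementation:

```python
# Hypothetical model of the slot constraint: at most nine of the eighteen
# augmentations can be installed, limited by per-location capacity.
SLOT_CAPACITY = {
    "arms": 1, "legs": 1, "eyes": 1, "head": 1,
    "subdermal": 2, "torso": 3,
}

def can_install(installed: dict[str, list[str]], slot: str, aug: str) -> bool:
    """Return True if `aug` fits: the slot exists and still has free capacity."""
    if slot not in SLOT_CAPACITY:
        return False
    return len(installed.get(slot, [])) < SLOT_CAPACITY[slot]

def install(installed: dict[str, list[str]], slot: str, aug: str) -> None:
    """Install `aug` into `slot`, or raise if the slot is full or unknown."""
    if not can_install(installed, slot, aug):
        raise ValueError(f"no free {slot} slot for {aug}")
    installed.setdefault(slot, []).append(aug)

build: dict[str, list[str]] = {}
install(build, "arms", "Combat Strength")                    # choosing this...
assert not can_install(build, "arms", "Microfibral Muscle")  # ...locks out the alternative
assert sum(SLOT_CAPACITY.values()) == 9                      # nine installable in total
```

Making each location a fixed-capacity slot is what forces the mutually exclusive choices the text describes, such as arm strength for melee versus lifting.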
"Deus Ex" features a head-up display crosshair, whose size dynamically shows where shots will fall based on movement, aim, and the weapon in use; the reticle expands while the player is moving or shifting their aim, and slowly shrinks to its original size while no actions are taken. How quickly the reticle shrinks depends on the character's proficiency with the equipped weapon, the number of accuracy modifications added to the weapon, and the level of the "Targeting" nano-augmentation. "Deus Ex" features twenty-four weapons, ranging from crowbars, electroshock weapons, and riot batons to laser-guided anti-tank rockets and assault rifles; both lethal and non-lethal weapons are available. The player can also make use of several weapons of opportunity, such as fire extinguishers. Gameplay in "Deus Ex" emphasizes player choice. Objectives can be completed in numerous ways, including stealth, sniping, heavy frontal assault, dialogue, or engineering and computer hacking. This level of freedom requires that levels, characters, and puzzles be designed with significant redundancy, as a single play-through of the game will miss large sections of dialogue, areas, and other content. In some missions, the player is encouraged to avoid using deadly force, and specific aspects of the story may change depending on how violent or non-violent the player chooses to be. The game is also unusual in that two of its boss villains can be killed off early in the game, or left alive to be defeated later, and this too affects how other characters interact with the player. Because of its design focus on player choice, "Deus Ex" has been compared with "System Shock", a game that inspired its design. Together, these factors give the game a high degree of replayability, as the player will have vastly different experiences, depending on which methods they use to accomplish objectives.
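The reticle behavior described above amounts to a grow-while-acting, decay-while-idle loop. The sketch below is a hypothetical reconstruction of that logic, not the game's actual code; every name and constant is invented for illustration, with higher weapon skill, more accuracy modifications, and a higher "Targeting" level all speeding up the recovery:

```python
# Hypothetical per-frame reticle update; all names and constants are invented.
def update_reticle(radius: float, base: float, moving: bool,
                   skill: int, mods: int, targeting_level: int,
                   dt: float = 1 / 60) -> float:
    """Advance the reticle radius by one frame of dt seconds."""
    if moving:
        # Bloom while moving or shifting aim, capped at a maximum spread.
        return min(radius + 120.0 * dt, 4.0 * base)
    # Recovery speeds up with weapon proficiency, accuracy mods and the
    # Targeting augmentation, so a specialist settles on target sooner.
    recovery = 30.0 * (1 + 0.5 * skill + 0.25 * mods + 0.5 * targeting_level)
    return max(radius - recovery * dt, base)

radius = 40.0
for _ in range(120):  # two idle seconds at 60 fps
    radius = update_reticle(radius, base=10.0, moving=False,
                            skill=3, mods=2, targeting_level=1)
assert radius == 10.0  # fully recovered to the base radius
```

Clamping the decay at the base radius is what gives the "slowly shrinks to its original size" behavior, while the skill-scaled recovery rate reproduces the proficiency dependence the text describes.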
"Deus Ex" was designed as a single-player game, and the initial releases of the Windows and Macintosh versions of the game did not include multiplayer functionality. Support for multiplayer modes was later incorporated through patches. The component consists of three game modes: deathmatch, basic team deathmatch, and advanced team deathmatch. Five maps, based on levels from the single-player portion of the game, were included with the original multiplayer patch, but many user-created maps exist. The PlayStation 2 release of "Deus Ex" does not offer a multiplayer mode. In April 2014, it was announced that GameSpy would cease its master server services, also affecting "Deus Ex". A community-made patch for the multiplayer mode has been created in response. "Deus Ex" takes place in an unspecified near future in an alternate history where real-world conspiracy theories are true. These include speculations regarding black helicopters, vaccinations, and FEMA, as well as Area 51, the ECHELON network, Men in Black, chupacabras (in the form of "greasels"), and grey aliens. Mysterious groups such as Majestic 12, the Illuminati, the Knights Templar, the Bilderberg Group, and the Trilateral Commission also either play a central part in the plot or are alluded to during the course of the game. The plot of "Deus Ex" depicts a society on a slow spiral into chaos. There is a massive division between the rich and the poor, not only socially, but in some cities physically. A lethal pandemic, known as the "Gray Death", ravages the world's population, especially within the United States, and has no cure. A synthetic vaccine, "Ambrosia", manufactured by the company VersaLife, nullifies the effects of the virus but is in critically short supply. Because of its scarcity, Ambrosia is available only to those deemed "vital to the social order", and finds its way primarily to government officials, military personnel, the rich and influential, scientists, and the intellectual elite.
With no hope for the common people of the world, riots occur worldwide, and some terrorist organizations have formed with the professed intent of assisting the downtrodden, among them the National Secessionist Forces (NSF) of the U.S. and a French group known as Silhouette. To combat these threats to the world order, the United Nations has expanded its influence around the globe to form the United Nations Anti-Terrorist Coalition (UNATCO). It is headquartered near New York City in a bunker beneath Liberty Island, placed there after a terrorist strike on the Statue of Liberty. The main character of "Deus Ex" is UNATCO agent JC Denton (voiced by Jay Franke), one of the first in a new line of agents physically altered with advanced nanotechnology to gain superhuman abilities, alongside his brother Paul (also voiced by Jay Franke), who joined UNATCO to avenge his parents' deaths at the hands of Majestic 12. His UNATCO colleagues include the mechanically-augmented and ruthlessly efficient field agents Gunther Hermann and Anna Navarre; Quartermaster General Sam Carter, and the bureaucratic UNATCO chief Joseph Manderley. UNATCO communications tech Alex Jacobson's character model and name are based on Warren Spector's nephew, Alec Jacobson. JC's missions bring him into contact with various characters, including NSF leader Juan Lebedev, hacker and scientist Tracer Tong, nano-tech expert Gary Savage, Nicolette DuClare (daughter of an Illuminati member), former Illuminati leader Morgan Everett, the Artificial Intelligences (AI) Daedalus and Icarus, and Bob Page, owner of VersaLife and leader of Majestic 12, a clandestine organization that has usurped the infrastructure of the Illuminati, allowing him to control the world for his own ends. 
After completing his training, UNATCO agent JC Denton takes several missions given by Director Joseph Manderley to track down members of the National Secessionist Forces (NSF) and their stolen shipments of the "Ambrosia" vaccine, the treatment for the "Gray Death" virus. Through these missions, JC is reunited with his brother, Paul, who is also nano-augmented. JC tracks the Ambrosia shipment to a private terminal at LaGuardia Airport. Paul meets JC outside the plane and explains that he has defected from UNATCO and is working with the NSF after learning that the Gray Death is a human-made virus, with UNATCO using its power to make sure only the elite receive the vaccine. JC returns to UNATCO headquarters and is told by Manderley that both he and Paul have been outfitted with a 24-hour kill switch and that Paul's has been activated due to his betrayal. Manderley orders JC to fly to Hong Kong to eliminate Tracer Tong, a hacker whom Paul has contact with, and who can disable the kill switches. Instead, JC returns to Paul's apartment to find Paul hiding inside. Paul further explains his defection and encourages JC to also defect by sending out a distress call to alert the NSF's allies. Upon doing so, JC becomes a wanted man by UNATCO, and his kill switch is activated by Federal Emergency Management Agency (FEMA) Director Walton Simons. JC is unable to escape UNATCO forces, and both he and Paul (provided he survived the raid on the apartment) are taken to a secret prison below UNATCO headquarters. An entity named "Daedalus" contacts JC and informs him that the prison is part of Majestic 12, and arranges for him and Paul to escape. The two flee to Hong Kong to meet with Tong, who deactivates their kill switches. Tong requests that JC infiltrate the VersaLife building. Doing so, JC discovers that the corporation is the source for the Gray Death, and he can steal the plans for the virus and destroy the "universal constructor" (UC) that produces it. 
Analysis of the virus shows that it was manufactured by the Illuminati, prompting Tong to send JC to Paris to try to make contact with the organization and obtain their help fighting Majestic 12. JC meets with Illuminati leader Morgan Everett and learns that the Gray Death virus was intended to be used for augmentation technology, but Majestic 12, led by trillionaire businessman and former Illuminatus Bob Page, was able to steal and repurpose it into its viral form. Everett recognizes that without VersaLife's universal constructor, Majestic 12 can no longer create the virus, and will likely target Vandenberg Air Force Base, where X-51, a group of former Area 51 scientists, have built another one. After aiding the base personnel in repelling a Majestic 12 attack, JC meets X-51 leader Gary Savage, who reveals that Daedalus is an artificial intelligence (AI) borne out of the ECHELON program. Everett attempts to gain control over Majestic 12's communications network by releasing Daedalus onto the U.S. military networks, but Page counters by releasing his own AI, Icarus, which merges with Daedalus to form a new AI, Helios, with the ability to control all global communications. After this, Savage enlists JC's help in procuring schematics for reconstructing components for the UC that were damaged during Majestic 12's raid of Vandenberg. JC finds the schematics and electronically transmits them to Savage. Page intercepts the transmission and launches a nuclear missile at Vandenberg to ensure that Area 51 (now Majestic 12's headquarters) will be the only location in the world with an operational UC. However, JC can reprogram the missile to strike Area 51. JC travels there himself to confront Page. When JC locates him, Page reveals that he seeks to merge with Helios and gain full control over all nanotechnology. JC is contacted by Tong, Everett, and the Helios AI simultaneously. 
All three factions ask for his help in defeating Page while furthering their own objectives, and JC is forced to choose between them. Tong seeks to plunge the world into a Dark Age by destroying the global communications hub and preventing anyone from taking control of the world. Everett offers Denton the chance to bring the Illuminati back to power by killing Bob Page and using the technology of Area 51 to rule the world with an invisible hand. Helios wishes to merge with Denton and rule the world as a benevolent dictator with infinite knowledge and reason. The player's decision determines the course of the future and brings the game to a close. After Looking Glass Technologies and Origin Systems released "" in January 1993, producer Warren Spector began to plan "Troubleshooter", the game that would become "Deus Ex". In his 1994 proposal, he described the concept as ""Underworld"-style first-person action" in a real-world setting with "big-budget, nonstop action". After Spector and his team were laid off from Looking Glass, John Romero of Ion Storm offered him the chance to make his "dream game" without any restrictions. Preproduction for "Deus Ex" began around August 1997 and lasted roughly six months. The game's working title was "Shooter: Majestic Revelations", and it was scheduled for release on Christmas 1998. The team developed the setting before the game mechanics. Noticing his wife's fascination with "The X-Files", Spector connected the "real world, millennial weirdness, [and] conspiracy" topics on his mind and decided to make a game about them that would appeal to a broad audience. The "Shooter" design document cast the player as an augmented agent working against an elite cabal in the "dangerous and chaotic" 2050s. It cited "Half-Life", "Fallout", "", and "GoldenEye 007" as game design influences, and used the stories and settings of "", "The Manchurian Candidate", "Robocop", "The X-Files", and "Men in Black" as reference points. 
The team designed a skill system that featured "special powers" derived from nanotechnological augmentation and avoided the inclusion of die rolling and skills that required micromanagement. Spector also cited Konami's 1995 role-playing video game "Suikoden" as an inspiration, stating that the limited choices in "Suikoden" inspired him to expand on the idea with more meaningful choices in "Deus Ex". In early 1998, the "Deus Ex" team grew to 20 people, and the game entered a 28-month production phase. The development team consisted of three programmers, six designers, seven artists, a writer, an associate producer, a "tech", and Spector. Two writers and four testers were hired as contractors. Chris Norden was the lead programmer and assistant director, Harvey Smith the lead designer, Jay Lee the lead artist, and Sheldon Pacotti the lead writer. Close friends of the team who understood the intentions behind the game were invited to playtest and give feedback. The wide range of input led to debates in the office and changes to the game. Spector later concluded that the team was "blinded by promises of complete creative freedom", and by their belief that the game would have no budget, marketing, or time restraints. By mid-1998, the game's title had become "Deus Ex", derived from the Latin literary device "deus ex machina" ("god from the machine"), in which a plot is resolved by an unpredictable intervention. Spector felt that the best aspects of "Deus Ex"s development were the "high-level vision" and length of preproduction, flexibility within the project, testable "proto-missions", and the Unreal Engine license. The team's pitfalls included the management structure, unrealistic goals, underestimating risks with artificial intelligence, their handling of proto-missions, and weakened morale from bad press. "Deus Ex" was released on June 23, 2000, and published by Eidos Interactive for Microsoft Windows. The team planned third-party ports for Mac OS 9 and Linux. 
The original 1997 design document for "Deus Ex" privileges character development over all other features. The game was designed to be "genre-busting": in parts simulation, role-playing, first-person shooter, and adventure. The team wanted players to consider "who they wanted to be" in the game, and for that to alter how they behaved in the game. In this way, the game world was "deeply simulated", or realistic and believable enough that the player would solve problems in creative, emergent ways without noticing distinct puzzles. However, the simulation ultimately failed to maintain the desired level of openness, and they had to brute force "skill", "action", and "character interaction" paths through each level. Playtesting also revealed that their idea of a role-playing game based on the real world was more interesting in theory than in reality, as certain aspects of the real world, such as hotels and office buildings, were not compelling in a game. One of the things that Spector wanted to achieve in "Deus Ex" was to make JC Denton a cipher for the player, to create better immersion and a stronger gameplay experience. He did not want the character to force any emotion, so that whatever feelings the player experiences come from themselves rather than from JC Denton. To do this, Spector instructed voice actor Jay Anthony Franke to record his dialogue without any emotion, in a monotone voice, which is unusual for a voice acting role. Once coded, the team's game systems did not work as intended. The early tests of the conversation system and user interface were flawed. The team also found augmentations and skills to be less interesting than they had seemed in the design document. In response, Harvey Smith substantially revised the augmentations and skills. Production milestones served as wake-up calls for the game's direction. 
A May 1998 milestone that called for a functional demo revealed that the size of the game's maps caused frame rate issues, which was one of the first signs that maps needed to be cut. A year later, the team reached a milestone for finished game systems, which led to better estimates for their future mission work and the reduction of the 500-page design document to 270 pages. Spector recalled Smith's mantra on this point: "less is more". One of the team's biggest blind spots was the AI programming for NPCs. Spector wrote that they considered it in preproduction, but that they did not figure out how to handle it until "relatively late in development". This led to wasted time when the team had to discard their old AI code. The team built atop their game engine's shooter-based AI instead of writing new code that would allow characters to exhibit convincing emotions. As a result, NPC behavior was variable until the very end of development. Spector felt that the team's "sin" was their inconsistent display of a trustable "human AI". The game was developed on systems including dual-processor Pentium Pro 200s and Athlon 800s with eight and nine gigabyte hard drives, some using SCSI. The team used "more than 100 video cards" throughout development. "Deus Ex" was built using Visual Studio, Lightwave, and Lotus Notes. They also made a custom dialogue editor, ConEdit. The team used UnrealEd atop the Unreal game engine for map design, which Spector wrote was "superior to anything else available". Their trust in UnrealScript led them to code "special-cases" for their immediate mission needs instead of more generalized multi-case code. Even as concerned team members expressed misgivings, the team only addressed this later in the project. To Spector, this was a lesson to always prefer "general solutions" over "special casing", such that the toolset works predictably. 
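Spector's lesson of preferring "general solutions" over "special casing" can be illustrated with a small sketch. This is illustrative Python rather than the team's actual UnrealScript, and every mission, door, and item name here is invented: hard-coded per-mission branches accumulate unpredictably, while one data-driven rule scales without code changes.

```python
# Hypothetical example of the "special casing" pitfall: each mission
# hard-codes its own unlock logic, so every new mission needs a new branch.
def door_unlocked_special(mission: str, has_key: bool, hacked: bool) -> bool:
    if mission == "liberty_island":
        return has_key
    if mission == "hong_kong":
        return hacked
    return False  # any mission not special-cased silently fails

# The "general solution": one rule evaluated against per-door data,
# so new doors are added as data instead of code.
def door_unlocked_general(door: dict, inventory: set) -> bool:
    return any(item in inventory for item in door["accepted"])

# A new door works without touching the unlock code.
vault_door = {"accepted": {"nano_key_01", "multitool"}}
assert door_unlocked_general(vault_door, {"nano_key_01"}) is True
assert door_unlocked_general(vault_door, set()) is False
```

The data-driven version is what keeps a toolset working predictably: designers can add content without discovering that some case was never handled in code.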
They waited to license a game engine until after preproduction, expecting the benefits of licensing to be more time for content and gameplay, which Spector reported to be the case. They chose the Unreal engine, as it did 80% of what they needed from an engine and was more economical than building from scratch. Their small programming team allowed for a larger design group. The programmers also found the engine accommodating, though it took about nine months to acclimate to the software. Spector felt that they would have understood the code better had they built it themselves, instead of "treating the engine as a black box" and coding conservatively. He acknowledged that this contributed to the Direct3D issues in their final release, which slipped through their quality assurance testing. Spector also noted that the artificial intelligence, pathfinding, and sound propagation were designed for shooters and should have been rewritten from scratch instead of relying on the engine. He thought the licensed engine worked well enough that he expected to use the same engine for the game's sequel "" and "Thief 3". He added that developers should not attempt to force their technology to perform in ways it was not intended, and should find a balance between perfection and pragmatism. The soundtrack of "Deus Ex", composed by Alexander Brandon (primary contributor, including main theme), Dan Gardopée ("Naval Base" and "Vandenberg"), Michiel van den Bos ("UNATCO", "Lebedev's Airfield", "Airfield Action", "DuClare Chateau", plus minor contribution to some of Brandon's tracks), and Reeves Gabrels ("NYC Bar"), was praised by critics for complementing the gritty atmosphere predominant throughout the game with melodious and ambient music incorporated from a number of genres, including techno, jazz, and classical. 
The music sports a basic dynamic element, similar to the iMUSE system used in early 1990s LucasArts games; during play, the music will change to a different iteration of the currently playing song based on the player's actions, such as when the player starts a conversation, engages in combat, or transitions to the next level. All the music in the game is tracked - Gabrels' contribution, "NYC Bar", was converted to a module by Brandon. "Deus Ex" has been re-released in several iterations since its original publication and has also been the basis of several mods developed by its fan community. The "Deus Ex: Game of the Year Edition", which was released on May 8, 2001, contains the latest game updates and a software development kit, a separate soundtrack CD, and a page from a fictional newspaper featured prominently in "Deus Ex" titled "The Midnight Sun", which recounts recent events in the game's world. However, later releases of said version do not include the soundtrack CD and contain a PDF version of the newspaper on the game's disc. The Mac OS version of the game, released a month after the Windows version, was shipped with the same capabilities and can also be patched to enable multiplayer support. However, publisher Aspyr Media did not release any subsequent editions of the game or any additional patches. As such, the game is only supported in Mac OS 9 and the "Classic" environment in Mac OS X, neither of which are compatible with Intel-based Macs. The Windows version will run on Intel-based Macs using Crossover, Boot Camp, or other software to enable a compatible version of Windows to run on a Mac. A PlayStation 2 port of the game, retitled "Deus Ex: The Conspiracy" outside of Europe, was released on March 26, 2002. Along with motion-captured character animations and pre-rendered introductory and ending cinematics that replaced the original versions, it features a simplified interface with optional auto-aim. 
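The dynamic, iMUSE-like behavior described in the soundtrack passage above (switching to a different iteration of the current song when the player starts a conversation, engages in combat, or changes level) can be sketched as a small state machine. This is a minimal sketch with invented state names and module filenames; the real game's tracker data and transition logic differ.

```python
# Sketch of iMUSE-style dynamic music: one song per level, several
# tracked-module iterations of it, selected by the player's current activity.
# All filenames and state names below are invented for illustration.
class DynamicMusic:
    VARIANTS = {
        "explore": "level_ambient.it",        # default iteration
        "conversation": "level_dialogue.it",  # player starts a conversation
        "combat": "level_combat.it",          # player engages in combat
        "outro": "level_outro.it",            # transition to the next level
    }

    def __init__(self):
        self.state = "explore"
        self.current_module = self.VARIANTS[self.state]

    def on_event(self, event: str) -> str:
        """Switch to the matching iteration of the current song, if any."""
        if event in self.VARIANTS and event != self.state:
            self.state = event
            self.current_module = self.VARIANTS[event]
        return self.current_module

music = DynamicMusic()
assert music.on_event("combat") == "level_combat.it"
assert music.on_event("explore") == "level_ambient.it"
```

Because every variant is an iteration of the same song, switching between them preserves musical continuity rather than restarting the score.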
There are many minor changes in level design, some to balance gameplay, but most to accommodate loading transition areas, due to the memory limitations of the PlayStation 2. The PlayStation 2 version was re-released in Europe on the PlayStation 3 as a PlayStation 2 Classic on May 16, 2012. Loki Games worked on a Linux version of the game, but the company went out of business before releasing it. The OpenGL layer they wrote for the port, however, was sent out to Windows gamers through an online patch. Though their quality assurance did not see major Direct3D issues, players noted "dramatic slowdowns" immediately following the launch, and the team did not understand the "black box" of the Unreal engine well enough to make it do exactly what they needed. Spector sorted reviews of "Deus Ex" into two categories by how they begin: either with how "Warren Spector makes games all by himself" or with how ""Deus Ex" couldn't possibly have been made by Ion Storm". He has said that the game won over 30 "best of" awards in 2001, and concluded that their final game was not perfect, but that they were much closer for having tried to "do things right or not at all". "Deus Ex" was built on the Unreal Engine, which already had an active community of modders. In September 2000, Eidos Interactive and Ion Storm announced in a press release that they would be releasing the software development kit (SDK), which included all the tools used to create the original game. Several team members, as well as project director Warren Spector, stated that they were "really looking forward to seeing what [the community] does with our tools". The kit was released on September 22, 2000, and soon gathered community interest, followed by the release of tutorials, small mods, and eventually announcements of large mods and conversions. While Ion Storm did not hugely alter the engine's rendering and core functionality, they introduced role-playing elements. 
In 2009, a fan-made mod called "The Nameless Mod" ("TNM") was released by Off Topic Productions. The game's protagonist is a user of an Internet forum, with digital places represented as physical locations. The mod offers roughly the same amount of gameplay as "Deus Ex" and adds several new features to the game, with a more open world structure than "Deus Ex" and new weapons such as the player character's fists. The mod was developed over seven years and has thousands of lines of recorded dialogue and two different parallel story arcs. Upon its release, "TNM" earned a 9/10 overall from "PC PowerPlay" magazine. In Mod DB's 2009 Mod of the Year awards, "The Nameless Mod" won the Editor's Choice award for Best Singleplayer Mod. In 2015, during the 15th anniversary of the game's release, Square Enix (which had acquired Eidos earlier) endorsed a free fan-created mod, "Deus Ex: Revision", which was released through Steam. The mod, created by Caustic Creative, is a graphical overhaul of the original game, adding support for newer versions of DirectX, improving the original game's textures and soundtrack, and adding more world-building aesthetics. According to "Computer Gaming World"s Stefan Janicki, "Deus Ex" had "sold well in North America" by early 2001. In the United States, it debuted at #6 on PC Data's sales chart for the week ending June 24, at an average retail price of $40. It fell to eighth place in its second week but rose back to sixth in its third. It proceeded to place in the top 10 rankings for August 6–12 and the week ending September 2, and to secure 10th place overall for the months of July and August. "Deus Ex" achieved sales of 138,840 copies and revenues of $5 million in the United States by the end of 2000, according to PC Data. The firm tracked another 91,013 copies sold in the country during 2001. The game was a larger hit in Europe; Janicki called it a "blockbuster" for the region, which broke a trend of weak sales for 3D games. 
He wrote, "[I]n Europe—particularly in England—the action/RPG dominated the charts all summer, despite competition from heavyweights like "Diablo II" and "The Sims"." In the German-speaking market, "PC Player" reported sales over 70,000 units for "Deus Ex" by early 2001. It debuted at #3 in the region for July 2000 and held the position in August, before dropping to #10, #12 and #27 over the following three months. In the United Kingdom, "Deus Ex" reached #1 on the sales charts during August and spent three months in the top 10. It received a "Silver" award from the Entertainment and Leisure Software Publishers Association (ELSPA) in February 2002, indicating lifetime sales of at least 100,000 units in the United Kingdom. The ELSPA later raised it to "Gold" status, for 200,000 sales. In April 2009, Square Enix revealed that "Deus Ex" had surpassed 1 million sales globally, but was outsold by "". "Deus Ex" received critical acclaim, attaining a score of 90 out of 100 from 28 critics on Metacritic. Thierry Nguyen from "Computer Gaming World" said that the game "delivers moments of brilliance, idiocy, ingenuity, and frustration". "Computer Games Magazine" praised the title for its deep gameplay and its use of multiple solutions to situations in the game. Similarly, "Edge" highlighted the game's freedom of choice, saying that "Deus Ex" "never tells you what to do. Goals are set, but alter according to your decisions." "Eurogamer"s Rob Fahey lauded the game, writing, "Moody and atmospheric, compelling and addictive, this is first person gaming in grown-up form, and it truly is magnificent." Former GameSpot reviewer Greg Kasavin, though awarding the game a score of 8.2 of 10, was disappointed by the security and lockpicking mechanics. "Such instances are essentially noninteractive", he wrote. "You simply stand there and spend a particular quantity of electronic picks or modules until the door opens or the security goes down." 
Kasavin made similar complaints about the hacking interface, noting that "Even with basic hacking skills, you'll still be able to bypass the encryption and password protection ... by pressing the 'hack' button and waiting a few seconds". The game's graphics and voice acting were also met with muted enthusiasm. Kasavin complained of "Deus Ex"s relatively sub-par graphics, blaming them on the game's "incessantly dark industrial environments". GamePro reviewer Chris Patterson took the time to note that despite being "solid acoustically", "Deus Ex" had moments of weakness. He poked fun at JC's "Joe Friday, 'just the facts', deadpan", and the "truly cheesy accents" of minor characters in Hong Kong and New York City. IGN called the graphics "blocky", adding that "the animation is stiff, and the dithering is just plain awful in some spots", referring to the limited capabilities of the Unreal Engine used to design the game. The website later stated that "overall Deus Ex certainly looks better than your average game". Reviewers and players also complained about the size of "Deus Ex"s save files. An Adrenaline Vault reviewer noted that "Playing through the entire adventure, [he] accumulated over 250 MB of save game data, with the average file coming in at over 15 MB." Jeff Lundrigan reviewed the PC version of the game for "Next Generation", rating it five stars out of five, and stated that "This is hands-down one of the best PC games ever made. Stop reading and go get yours now." The game developed a strong cult following, leading to a core modding and playing community that remained active over 15 years after its release. In an interview with IGN in June 2015, game director Warren Spector said he never expected "Deus Ex" to sell many copies, but he did expect it to become a cult classic among a smaller, active community, and he continues to receive fan mail from players regarding their experiences and thoughts about "Deus Ex". 
"Deus Ex" received over 30 "best of" awards in 2001, from outlets such as IGN, GameSpy, "PC Gamer", "Computer Gaming World", and The Adrenaline Vault. It won "Excellence in Game Design" and "Game Innovation Spotlight" at the 2001 Game Developers Choice Awards, and it was nominated for "Game of the Year". At the Interactive Achievement Awards, it won in the "Computer Innovation" and "Computer Action/Adventure" categories and received nominations for "Sound Design", "PC Role-Playing", and "Game of the Year" in both the PC and overall categories. The British Academy of Film and Television Arts named it "PC Game of the Year". The game also collected several "Best Story" accolades, including first prize in Gamasutra's 2006 "Quantum Leap" awards for storytelling in a video game. Since its release, "Deus Ex" has appeared in several "Greatest Games of All Time" lists and Hall of Fame features. It was included in IGN's "100 Greatest Games of All Time" (#40, #21 and #34 in 2003, 2005 and 2007, respectively), "Top 25 Modern PC Games" (4th place in 2010) and "Top 25 PC Games of All Time" (#20 and #21 in 2007 and 2009 respectively) lists. GameSpy featured the game in its "Top 50 Games of All Time" (18th place in 2001) and "25 Most Memorable Games of the Past 5 Years" (15th place in 2004) lists, and in the site's "Hall of Fame". "PC Gamer" placed "Deus Ex" on its "Top 100 PC Games of All Time" (#2, #2, #1 by staff and #4 by readers in 2007, 2008, 2010 and 2010 respectively) and "50 Best Games of All Time" (#10 and #27 in 2001 and 2005) lists, and it was awarded 1st place in "PC Zone"s "101 Best PC Games Ever" feature. It was also included in Yahoo! UK Video Games' "100 Greatest Computer Games of All Time" (28th place) list, and in "Edge"s "The 100 Best Videogames" (29th place in 2007) and "100 Best Games to Play Today" (57th place in 2009) lists. "Deus Ex" was named the second-best game of the 2000s by Gamasutra. 
In 2012, "Time" named it one of the 100 greatest video games of all time, and G4tv ranked it as the 53rd best game of all time for its "complex and well-crafted story that was really the start of players making choices that genuinely affect the outcome". 1UP.com listed it as one of the most important games of all time, calling its influence "too massive to properly gauge". A film adaptation based on the game was initially announced in May 2002 by Columbia Pictures. The film was to be produced by Laura Ziskin, with Greg Pruss attached to write the screenplay. Peter Schlessel, president of production for Columbia Pictures, and Paul Baldwin, president of marketing for Eidos Interactive, stated that they were confident that the adaptation would be a successful development for both the studios and the franchise. In a March 2003 interview, Pruss told IGN that the character of JC Denton would be "a little bit filthier than he was in the game". He further stated that the script was shaping up to be darker in tone than the original game. Although a release date was scheduled for 2006, the film did not get past the scripting stage. In 2012, CBS Films revived the project, buying the rights and commissioning a film inspired by the "Deus Ex" series; its direct inspiration was the 2011 game "". C. Robert Cargill and Scott Derrickson were to write the screenplay, with Derrickson directing the film. A sequel to the game, titled "", was released in the United States on December 2, 2003, and then in Europe in early 2004 for both the PC and the Xbox game console. A second sequel, titled "Deus Ex: Clan Wars", was initially conceived as a multiplayer-focused third game for the series. After the commercial performance and public reception of "Deus Ex: Invisible War" failed to meet expectations, the decision was made to set the game in a separate universe, and "Deus Ex: Clan Wars" was eventually published under the title "Project: Snowblind". 
On March 29, 2007, Valve announced that "Deus Ex" and its sequel would be available for purchase from their Steam service. Among the games announced were several other Eidos franchise titles, including "" and "Tomb Raider". Eidos Montréal produced a prequel to "Deus Ex" called "". This was confirmed on November 26, 2007, when Eidos Montréal posted a teaser trailer for the title on their website. The game was released on August 23, 2011, for the PC, PlayStation 3, and Xbox 360 platforms and received critical acclaim. On April 7, 2015, Eidos announced a sequel to "Deus Ex: Human Revolution" and second prequel to "Deus Ex", titled "". It was released on August 23, 2016.
Diego Maradona Diego Armando Maradona (born 30 October 1960) is an Argentine football manager and retired professional footballer. He is currently the coach of Argentine Primera División club Gimnasia de La Plata. He is widely regarded as one of the greatest football players of all time. He was one of the two joint winners of the FIFA Player of the 20th Century award. Maradona's vision, passing, ball control and dribbling skills were combined with his small stature (), which gave him a low center of gravity, allowing him to maneuver better than most other football players; he would often dribble past multiple opposing players on a run. His presence and leadership on the field had a great effect on his team's general performance, while he would often be singled out by the opposition. In addition to his creative abilities, he also possessed an eye for goal and was known to be a free kick specialist. A precocious talent, Maradona was given the nickname ""El Pibe de Oro"" ("The Golden Boy"), a name that stuck with him throughout his career. An advanced playmaker who operated in the classic number 10 position, Maradona was the first player in football history to set the world record transfer fee twice: first when he transferred to Barcelona for a then world record £5 million, and second when he transferred to Napoli for another record fee of £6.9 million. He played for Argentinos Juniors, Boca Juniors, Barcelona, Napoli, Sevilla and Newell's Old Boys during his club career, and is most famous for his time at Napoli and Barcelona, where he won numerous accolades. In his international career with Argentina, he earned 91 caps and scored 34 goals. Maradona played in four FIFA World Cups, including the 1986 World Cup in Mexico, where he captained Argentina and led them to victory over West Germany in the final, and won the Golden Ball as the tournament's best player. 
In the 1986 World Cup quarter-final, he scored both goals in a 2–1 victory over England that entered football history for two different reasons. The first goal was an unpenalized handling foul known as the "Hand of God", while the second goal followed a dribble past five England players, voted "Goal of the Century" by FIFA.com voters in 2002. Maradona became coach of Argentina in November 2008. He was in charge of the team at the 2010 World Cup in South Africa before leaving at the end of the tournament. He coached Dubai-based club Al Wasl in the UAE Pro-League for the 2011–12 season. In 2017, Maradona became the coach of Fujairah before leaving at the end of the season. In May 2018, Maradona was announced as the new chairman of Belarusian club Dynamo Brest. He arrived in Brest and was presented by the club to start his duties in July. From September 2018 to June 2019, Maradona was coach of Mexican club Dorados. Diego Armando Maradona was born on 30 October 1960, at the Policlínico (Polyclinic) Evita Hospital in Lanús, Buenos Aires Province, but raised in Villa Fiorito, a shantytown on the southern outskirts of Buenos Aires, Argentina, to a poor family that had moved from Corrientes Province. He was the first son after four daughters. He has two younger brothers, Hugo ("el Turco") and Raúl ("Lalo"), both of whom were also professional football players. His parents were Diego Maradona "Chitoro" (d. 2015) and Dalma Salvadora Franco "Doña Tota" (1930–2011). They were both born and brought up in the town of Esquina in the north-east of Corrientes Province, living only two hundred metres from each other on the banks of the Corriente River. In 1950, they left Esquina and settled in Buenos Aires. At age eight, Maradona was spotted by a talent scout while he was playing in his neighbourhood club Estrella Roja. He became a staple of "Los Cebollitas" (The Little Onions), the junior team of Buenos Aires's Argentinos Juniors. 
As a 12-year-old ball boy, he amused spectators by showing his wizardry with the ball during the halftime intermissions of first division games. He named Brazilian playmaker Rivelino and Manchester United winger George Best among his inspirations growing up. On 20 October 1976, Maradona made his professional debut for Argentinos Juniors, 10 days before his 16th birthday, against Talleres de Córdoba. He entered the pitch wearing the number 16 jersey and became the youngest player in the history of the Argentine Primera División. A few minutes after his debut, Maradona kicked the ball through Juan Domingo Cabrera's legs, a nutmeg that would become legendary. After the game, Maradona said, "That day I felt I had held the sky in my hands." Thirty years later, Cabrera remembered Maradona's debut: "I was on the right side of the field and went to press him, but he didn't give me a chance. He made the nutmeg and when I turned around, he was far away from me". Maradona scored his first goal in the Primera División against Marplatense team San Lorenzo on 14 November 1976, two weeks after turning 16. Maradona spent five years at Argentinos Juniors, from 1976 to 1981, scoring 115 goals in 167 appearances before his US$4 million transfer to Boca Juniors. Maradona received offers to join other clubs, including River Plate, who offered to make him the club's best paid player. Nevertheless, Maradona expressed his will to be transferred to Boca Juniors, the team he had always wanted to play for. Maradona signed a contract with Boca Juniors on 20 February 1981. He made his debut two days later against Talleres de Córdoba, scoring twice in the club's 4–1 win. On 10 April, Maradona played his first "Superclásico" against River Plate at La Bombonera stadium. Boca defeated River 3–0 with Maradona scoring a goal after dribbling past Alberto Tarantini and Fillol. 
Despite the distrustful relationship between Maradona and Boca Juniors manager Silvio Marzolini, Boca had a successful season, winning the league title after securing a point against Racing Club. That would be the only title Maradona won in the Argentine domestic league. After the 1982 World Cup, in June, Maradona was transferred to Barcelona in Spain for a then world record fee of £5 million ($7.6 million). In 1983, under coach César Luis Menotti, Barcelona and Maradona won the Copa del Rey (Spain's annual national cup competition), beating Real Madrid, and the Spanish Super Cup, beating Athletic Bilbao. On 26 June 1983, Barcelona defeated Real Madrid on the road in one of the world's biggest club games, "El Clásico", a match where Maradona scored and became the first Barcelona player to be applauded by archrival Real Madrid fans. Maradona dribbled past Madrid goalkeeper Agustín, and as he approached the empty goal, he stopped just as Madrid defender Juan José came sliding in a desperate attempt to block the shot and ended up crashing into the post, before Maradona slotted the ball into the net. The manner of the goal led many inside the stadium to start applauding; only Ronaldinho (in November 2005) and Andrés Iniesta (in November 2015) have since been granted such an ovation as Barcelona players from Madrid fans at the Santiago Bernabéu. Due to illness and injury as well as controversial incidents on the field, Maradona had a difficult tenure in Barcelona. First a bout of hepatitis, then a broken ankle in a La Liga game at the Camp Nou in September 1983, caused by an ill-timed tackle by Athletic Bilbao's Andoni Goikoetxea, threatened to jeopardize Maradona's career, but with treatment and therapy, it was possible for him to return to the pitch after a three-month recovery period. 
The end of the 1983–84 season included a violent and chaotic fight Maradona was directly involved in at the 1984 Copa del Rey final at the Santiago Bernabéu in Madrid against Athletic Bilbao. After receiving another rough tackle by Goikoetxea which wounded his leg, being taunted with xenophobic, racist insults related to his father's Native American ancestry throughout the match by Bilbao fans, and being provoked by Bilbao's Miguel Sola at full time as Barcelona lost 1–0, Maradona snapped. He aggressively got up, stood inches from Sola's face and the two exchanged words. This started a chain reaction of emotional reactions from both teams. Using expletives, Sola mimicked a gesture from the crowd towards Maradona by using a xenophobic term. Maradona then headbutted Sola, elbowed another Bilbao player in the face and kneed another player in the head, knocking him out cold. The Bilbao squad surrounded Maradona to exact some retribution with Goikoetxea connecting with a high kick to his chest, before the rest of the Barcelona squad joined in to help Maradona. From this point, Barcelona and Bilbao players brawled on the field with Maradona in the centre of the action, kicking and punching anyone in a Bilbao shirt. The mass brawl was played out in front of the Spanish King Juan Carlos and an audience of 100,000 fans inside the stadium, and more than half of Spain watching on television. After fans began throwing solid objects on the field at the players, coaches and even photographers, sixty people were injured, with the incident effectively sealing Maradona's transfer out of the club in what was his last game in a Barcelona shirt. One Barcelona executive stated, "When I saw those scenes of Maradona fighting and the chaos that followed I realized we couldn't go any further with him." Maradona got into frequent disputes with FC Barcelona executives, particularly club president Josep Lluís Núñez, culminating with a demand to be transferred out of Camp Nou in 1984. 
During his two injury-hit seasons at Barcelona, Maradona scored 38 goals in 58 games. Maradona transferred to Napoli in Italy's Serie A for another world record fee, £6.9 million ($10.48 million). Maradona arrived in Naples and was presented to the world media as a Napoli player on 5 July 1984, welcomed by 75,000 fans at the Stadio San Paolo. Sports writer David Goldblatt commented, "They [the fans] were convinced that the saviour had arrived." A local newspaper stated that despite the lack of a "mayor, houses, schools, buses, employment and sanitation, none of this matters because we have Maradona". Prior to Maradona's arrival, Italian football was dominated by teams from the north and centre of the country, such as A.C. Milan, Juventus, Inter Milan and Roma, and no team in the south of the Italian Peninsula had ever won a league title. At Napoli, Maradona reached the peak of his professional career: he soon inherited the captain's armband from Napoli veteran defender Giuseppe Bruscolotti and quickly became an adored star among the club's fans; in his time there he elevated the team to the most successful era in its history. Maradona played for Napoli at a period when north–south tensions in Italy were at a peak due to a variety of issues, notably the economic differences between the two. Led by Maradona, Napoli won their first ever Serie A Italian Championship in 1986–87. Goldblatt wrote, "The celebrations were tumultuous. A rolling series of impromptu street parties and festivities broke out contagiously across the city in a round-the-clock carnival which ran for over a week. The world was turned upside down. The Neapolitans held mock funerals for Juventus and Milan, burning their coffins, their death notices announcing 'May 1987, the other Italy has been defeated. A new empire is born.'" Murals of Maradona were painted on the city's ancient buildings, and newborn children were named in his honor. 
The following season, the team's prolific attacking trio, formed by Maradona, Bruno Giordano and Careca, was later dubbed the "Ma-Gi-Ca" ("magical") front-line. Napoli would win their second league title in 1989–90, and finish runners-up in the league twice, in 1987–88 and 1988–89. Other honors during the Maradona era at Napoli included the Coppa Italia in 1987 (as well as a second-place finish in the Coppa Italia in 1989), the UEFA Cup in 1989 and the Italian Supercup in 1990. During the 1989 UEFA Cup Final against Stuttgart, Maradona scored from a penalty in a 2–1 home victory in the first leg, later assisting Careca's match–winning goal, while in the second leg on 17 May – a 3–3 away draw – he assisted Ciro Ferrara's goal with a header. Despite primarily playing in a creative role as an attacking midfielder, Maradona was the top scorer in Serie A in 1987–88, with 15 goals, and was the all-time leading goalscorer for Napoli, with 115 goals, until his record was broken by Marek Hamšík in 2017. When asked who was the toughest player he ever faced, A.C. Milan central defender Franco Baresi stated that it was Maradona, a view shared by his Milan teammate Paolo Maldini, who viewed Maradona and Ronaldo as the best players he ever faced, stating in 2008, "The best ever I played against was Maradona." While Maradona was successful on the field during his time in Italy, his personal problems increased. His cocaine use continued, and he received US$70,000 in fines from his club for missing games and practices, ostensibly because of "stress". He faced a scandal there regarding an illegitimate son, and he was also the object of some suspicion over an alleged friendship with the Camorra. Later on, in honour of Maradona and his achievements during his career at Napoli, the number 10 jersey of Napoli was officially retired. After serving a 15-month ban for failing a drug test for cocaine, Maradona left Napoli in disgrace in 1992. 
Despite interest from Real Madrid and Marseille, he signed for Sevilla, where he stayed for one year. In 1993, he played for Newell's Old Boys and in 1995 returned to Boca Juniors for a two-year stint. Maradona also appeared for Tottenham Hotspur in a testimonial match for Osvaldo Ardiles against Internazionale, shortly before the 1986 World Cup. Maradona was himself given a testimonial match in November 2001, played between an all-star World XI and the Argentina national team. During his time with the Argentina national team, Maradona scored 34 goals in 91 appearances. He made his full international debut at age 16, against Hungary, on 27 February 1977. Maradona was left off the Argentine squad for the 1978 World Cup on home soil by coach César Luis Menotti who felt he was too young at age 17. At age 18, Maradona played the 1979 FIFA World Youth Championship in Japan and emerged as the star of the tournament, shining in Argentina's 3–1 final win over the Soviet Union. On 2 June 1979, Maradona scored his first senior international goal in a 3–1 win against Scotland at Hampden Park. He went on to play for Argentina in two 1979 Copa América ties during August 1979, a 2–1 loss against Brazil and a 3–0 win over Bolivia in which he scored his side's third goal. Speaking thirty years later on the impact of Maradona's performances in 1979, FIFA President Sepp Blatter stated, "Everyone has an opinion on Diego Armando Maradona, and that’s been the case since his playing days. My most vivid recollection is of this incredibly gifted kid at the second FIFA U-20 World Cup in Japan in 1979. He left everyone open-mouthed every time he got on the ball." Maradona and his compatriot Lionel Messi are the only players to win the Golden Ball at both the FIFA U-20 World Cup and FIFA World Cup. Maradona did so in 1979 and 1986, which Messi emulated in 2005 and 2014. Maradona played his first World Cup tournament in 1982 in his new country of residence, Spain. 
Argentina played Belgium in the opening game of the 1982 Cup at the Camp Nou in Barcelona. The Catalan crowd was eager to see their new world-record signing Maradona in action, but he did not perform to expectations, as Argentina, the defending champions, lost 1–0. Although the team convincingly beat both Hungary and El Salvador in Alicante to progress to the second round, there were internal tensions within the team, with the younger, less experienced players at odds with the older, more experienced players. In a team that also included such players as Mario Kempes, Osvaldo Ardiles, Ramón Díaz, Daniel Bertoni, Alberto Tarantini, Ubaldo Fillol and Daniel Passarella, the Argentine side was defeated in the second round by Brazil and by eventual winners Italy. The Italian match is renowned for Maradona being aggressively man-marked by Claudio Gentile, as Italy beat Argentina at the Sarrià Stadium in Barcelona, 2–1. Maradona played in all five matches without being substituted, scoring twice against Hungary. He was fouled repeatedly in all five games and particularly in the last one against Brazil at the Sarrià, a game that was blighted by poor officiating and violent fouls. With Argentina already down 3–0 to Brazil, Maradona's temper eventually got the better of him and he was sent off with five minutes remaining for a serious retaliatory foul against Batista. Maradona captained the Argentine national team to victory in the 1986 World Cup in Mexico, winning the final in Mexico City against West Germany. Throughout the tournament, Maradona asserted his dominance and was the most dynamic player of the tournament. He played every minute of every Argentina game, scoring five goals and making five assists, three of those in the opening match against South Korea at the Olimpico Universitario Stadium in Mexico City. His first goal of the tournament came against Italy in the second group game in Puebla. 
Argentina eliminated Uruguay in the first knockout round in Puebla, setting up a match against England at the Azteca Stadium, also in Mexico City. After he scored two contrasting goals in the 2–1 quarter-final win against England, his legend was cemented. The majesty of his second goal and the notoriety of his first led the French newspaper "L'Équipe" to describe Maradona as "half-angel, half-devil". This match was played against the background of the Falklands War between Argentina and the United Kingdom. Replays showed that he had scored the first goal by striking the ball with his hand. Maradona was coyly evasive, describing it as "a little with the head of Maradona and a little with the hand of God". It became known as the "Hand of God". Ultimately, on 22 August 2005, Maradona acknowledged on his television show that he had hit the ball with his hand purposely, that no contact with his head was made, and that he immediately knew the goal was illegitimate. The incident remains one of the most controversial in World Cup history. The goal stood, much to the fury of the English players. Maradona's second goal, just four minutes after the hotly disputed hand-goal, was later voted by FIFA as the greatest goal in the history of the World Cup. He received the ball in his own half, swivelled around and with 11 touches ran more than half the length of the field, dribbling past five English outfield players (Peter Beardsley, Steve Hodge, Peter Reid, Terry Butcher and Terry Fenwick) before he left goalkeeper Peter Shilton on his backside with a feint and slotted the ball into the net. This goal was voted "Goal of the Century" in a 2002 online poll conducted by FIFA. A 2002 Channel 4 poll in the UK saw his performance ranked number 6 in the list of the 100 Greatest Sporting Moments. Maradona followed this with two more goals in a semi-final match against Belgium at the Azteca, including another virtuoso dribbling display for the second goal. 
In the final match, West Germany attempted to contain him by double-marking, but he nevertheless found the space past the West German player Lothar Matthäus to give the final pass to Jorge Burruchaga for the winning goal. Argentina beat West Germany 3–2 in front of 115,000 fans at the Azteca, with Maradona lifting the World Cup as captain. During the course of the tournament, Maradona attempted or created more than half of Argentina's shots, attempted a tournament-best 90 dribbles – some three times more than any other player – and was fouled a record 53 times, winning his team twice as many free kicks as any other player. Maradona scored or assisted 10 of Argentina's 14 goals (71%), including the assist for the winning goal in the final, ensuring that he would be remembered as one of the greatest names in football history. Maradona won the Golden Ball as the best player of the tournament by unanimous vote and was widely regarded as having won the World Cup virtually single-handedly, something that he later stated he did not entirely agree with. Zinedine Zidane, watching the 1986 World Cup as a 14-year-old, stated Maradona "was on another level". In a tribute to him, Azteca Stadium authorities built a statue of him scoring the "Goal of the Century" and placed it at the entrance of the stadium. Regarding Maradona's performance at the 1986 World Cup in Mexico, in 2014, Roger Bennett of "ESPN FC" described it as "the most virtuoso performance a World Cup has ever witnessed," while Kevin Baxter of the "Los Angeles Times" called it "one of the greatest individual performances in tournament history," and Steven Goff of "The Washington Post" dubbed his performance "one of the finest in tournament annals." In 2002, Russell Thomas of "The Guardian" described Maradona's second goal against England in the 1986 World Cup quarter-finals as "arguably the greatest individual goal ever." 
In a 2009 article for "CBC Sports", John Molinaro described the goal as "the greatest ever scored in the tournament – and, maybe, in soccer." In a 2018 article for "Sportsnet", he added: "No other player, not even Pelé in 1958 nor Paolo Rossi in 1982, had dominated a single competition the way Maradona did in Mexico." He also went on to say of Maradona's performance: "The brilliant Argentine artist single-handedly delivered his country its second World Cup." Regarding his two memorable goals against England in the quarter-finals, he commented: "Yes, it was Maradona’s hand, and not God’s, that was responsible for the first goal against England. But while the 'Hand of God' goal remains one of the most contentious moments in World Cup history, there can be no disputing that his second goal against England ranks as the greatest ever scored in the tournament. It transcended mere sports—his goal was pure art." Maradona captained Argentina again in the 1990 World Cup in Italy, reaching yet another World Cup final. An ankle injury affected his overall performance, and he was much less dominant than four years earlier. After losing their opening game to Cameroon at the San Siro in Milan, Argentina were almost eliminated in the first round, only qualifying in third position from their group. In the round of 16 match against Brazil in Turin, Claudio Caniggia scored the only goal after being set up by Maradona. In the quarter-final, Argentina faced Yugoslavia in Florence; the match ended 0–0 after 120 minutes, with Argentina advancing in a penalty shootout even though Maradona's kick, a weak shot to the goalkeeper's right, was saved. The semi-final against the host nation Italy at Maradona's club stadium in Naples, the Stadio San Paolo, was also resolved on penalties after a 1–1 draw. This time, however, Maradona was successful with his effort, daringly rolling the ball into the net with an almost exact replica of his unsuccessful kick in the previous round. 
At the final in Rome, Argentina lost 1–0 to West Germany, the only goal being a penalty by Andreas Brehme in the 85th minute after a controversial foul on Rudi Völler. At the 1994 World Cup in the United States, Maradona played in only two games (both at the Foxboro Stadium near Boston), scoring one goal against Greece, before being sent home after failing a drug test for ephedrine doping. After scoring against Greece, Maradona had one of the most infamous World Cup goal celebrations as he ran towards one of the sideline cameras shouting with a distorted face and bulging eyes. This turned out to be Maradona's last international goal for Argentina in what was his last appearance for his country. In his autobiography, Maradona argued that the test result was due to his personal trainer giving him the power drink Rip Fuel. His claim was that the U.S. version, unlike the Argentine one, contained the chemical and that, having run out of his Argentine dosage, his trainer unwittingly bought the U.S. formula. FIFA expelled him from USA '94, and Argentina were subsequently eliminated in the second round by Romania in Los Angeles. Maradona has also separately claimed that he had an agreement with FIFA, on which the organization reneged, to allow him to use the drug for weight loss before the competition in order to be able to play. His failed drugs test at the 1994 World Cup signaled the end of his international career, which had lasted 17 years and yielded 34 goals from 91 games, as well as one winner's medal and one runners-up medal in the World Cup. Described as a "classic number 10" in the media, Maradona was a traditional playmaker who usually played in a free role, either as an attacking midfielder behind the forwards, or as a second striker in a front–two, although he was also deployed as an offensive–minded central midfielder in a 4–4–2 formation on occasion. 
Maradona was renowned for his dribbling ability, vision, close ball control, passing and creativity, and is considered one of the most skilful players in the sport. He had a compact physique, and with his strong legs, low center of gravity, and resulting balance, he could withstand physical pressure well while running with the ball, despite his small stature. His acceleration, quick feet, and agility, combined with his dribbling skills and close control at speed, allowed him to change direction quickly, making him difficult for opponents to defend against. He is regarded by several pundits and football figures as one of the greatest dribblers in the history of the game; former Dutch player Johan Cruyff saw similarities between Maradona and Lionel Messi, with the ball seemingly attached to their body when dribbling. His physical strengths were illustrated by his two goals against Belgium in the 1986 World Cup. Although he was known for his penchant for undertaking individual runs with the ball, he was also a strategist and an intelligent team player, with excellent spatial awareness, as well as being highly technical with the ball. He could manage himself effectively in limited spaces, and would attract defenders only to quickly dash out of the melee (as with his second goal against England in 1986), or give an assist to a free teammate. Short but strong, he could hold the ball long enough with a defender on his back to wait for a teammate making a run or to find a gap for a quick shot. He showed leadership qualities on the field and captained Argentina in their World Cup campaigns of 1986, 1990 and 1994. While he was primarily a creative playmaker, Maradona was also known for his finishing and goalscoring ability. Former Milan manager Arrigo Sacchi also praised Maradona for his defensive work–rate off the ball in a 2010 interview with "Il Corriere dello Sport". 
Maradona was the team leader on and off the field – he would speak up on a range of issues on behalf of the players – and his ability as a player and his overpowering personality had a major positive effect on his team, with his 1986 World Cup teammate Jorge Valdano stating: "Maradona was a technical leader: a guy who resolved all difficulties that may come up on the pitch. Firstly, he was in charge of making the miracles happen, that's something that gives team-mates a lot of confidence. Secondly, the scope of his celebrity was such that he absorbed all the pressures on behalf of his team-mates. What I mean is: one slept soundly the night before a game not just because you knew you were playing next to Diego and Diego did things no other player in the world could do, but also because unconsciously we knew that if it was the case that we lost then Maradona would shoulder more of the burden, would be blamed more, than the rest of us. That was the kind of influence he exercised on the team." Lauding the "charisma" of Maradona, another of his Argentina teammates, prolific striker Gabriel Batistuta, stated, "Diego could command a stadium, have everyone watch him. I played with him and I can tell you how technically decisive he was for the team". Napoli's former president – Corrado Ferlaino – commented on Maradona's leadership qualities during his time with the club in 2008, describing him as "a coach on the pitch." One of Maradona's trademark moves was dribbling at full speed on the right wing and, on reaching the opponent's goal line, delivering accurate passes to his teammates. Another trademark was the "rabona", a reverse cross or pass struck behind the leg that bears all the weight. This maneuver led to several assists, such as the cross for Ramón Díaz's header against Switzerland in 1980. He was also a dangerous free kick and penalty kick taker, renowned for his ability to bend the ball from corners and direct set pieces. 
Regarded as one of the best dead–ball specialists of all time, Maradona had a free kick technique which often saw him raise his knee at a high angle when striking the ball, enabling him to lift it high over the wall and allowing him to score free kicks even from close range, within 17 to 22 yards (16 to 20 metres) of the goal, or even just outside the penalty area. His style of taking free kicks influenced several other specialists, including Gianfranco Zola, Andrea Pirlo, and Lionel Messi. Maradona was famous for his cunning personality. Inherent within his nickname "El Pibe de Oro" ("Golden Boy") is a sense of mischief, with "pibe" being an anti-establishment rogue, street smart and full of guile. Some critics view his controversial "Hand of God" goal at the 1986 World Cup as a clever maneuver, with one of the opposition players, Glenn Hoddle, admitting that Maradona had disguised it by flicking his head at the same time as palming the ball. The goal itself has been viewed as an embodiment of the Buenos Aires shanty town Maradona was brought up in and its concept of "viveza criolla" — "native cunning". While critical of the illegitimate first goal, England striker Gary Lineker conceded, "When Diego scored that second goal against us, I felt like applauding. I'd never felt like that before, but it's true... and not just because it was such an important game. It was impossible to score such a beautiful goal. He's the greatest player of all time, by a long way. A genuine phenomenon." Maradona again used his hand without punishment at the 1990 World Cup, this time on his own goal line, to prevent the Soviet Union from scoring. A number of publications have referred to Maradona as the Artful Dodger, the urchin pickpocket from Charles Dickens' "Oliver Twist". Maradona was dominantly left-footed, often using his left foot even when the ball was positioned more suitably for a right-footed connection. 
His first goal against Belgium in the 1986 World Cup semi-final is a clear illustration of this: he had run into the inside right channel to receive a pass but let the ball travel across to his left foot, requiring more technical ability. During his run past several England players in the previous round for the "Goal of the Century", he did not use his right foot once, despite spending the whole movement on the right-hand side of the pitch. In the 1990 World Cup second round tie against Brazil, he did use his right foot to set up the winning goal for Claudio Caniggia, as two Brazilian markers had forced him into a position that made use of his left foot less practical. Regarded as the best player of his generation, as well as one of the greatest players of all time by several pundits, players, and managers, and by some as the best player ever, Maradona is renowned as one of the most skilful players in the history of football, as well as one of the greatest dribblers and free kick takers the sport has seen. A precocious talent in his youth, Maradona, in addition to his playing ability, also drew praise from his former manager Menotti for his dedication, determination, and the work ethic he demonstrated in order to improve the technical aspect of his game in training, despite his natural gifts, with the manager noting: "I'm always cautious about using the word 'genius'. I find it hard to apply that even to Mozart. The beauty of Diego's game has a hereditary element – his natural ease with the ball – but it also owes a lot to his ability to learn: a lot of those brushstrokes, those strokes of 'genius', are in fact a product of his hard work. Diego worked very hard to be the best." Maradona's former Napoli manager – Ottavio Bianchi – also praised his discipline in training, commenting: "Diego is different to the one that they depict. When you got him on his own he was a very good kid. It was beautiful to watch him and coach him. 
They all speak of the fact that he did not train, but it was not true because Diego was the last person to leave the pitch, it was necessary to send him away because otherwise he would stay for hours to invent free kicks." However, although, as Bianchi noted, Maradona was known for making "great plays" and doing "unimaginable" and "incredible things" with the ball during training sessions, and would even go through periods of rigorous exercise, he was equally known for his limited work-rate in training without the ball, and even gained a degree of infamy during his time in Italy for missing training sessions with Napoli, while he often trained independently instead of with his team. In a 2019 documentary film on his life, Maradona confessed that his weekly regime consisted of "playing a game on Sunday, going out until Wednesday, then hitting the gym on Thursday." Regarding his inconsistent training regimen, the film's director, Asif Kapadia, commented in 2020: "He had a metabolism. He would look so incredibly out of shape, but then he’d train like crazy and sweat it off by the time matchday came along. His body shape just didn’t look like a footballer, but then he had this ability and this balance. He had a way of being, and that idea of talking to him honestly about how a typical week transpired was pretty amazing." He also revealed that Maradona was ahead of his time in the fact that he had a personal fitness coach – Fernando Signorini – who trained him in a variety of areas, in addition to looking after his physical conditioning, adding: "While he [Maradona] was in a football team he had his own regime. How many players would do that? How many players would even know to think like that? 'I’m different to anyone else so I need to train at what I’m good at and what I’m weak at.' Signorini is very well read and very intelligent. He would literally say, 'This is the way I’m going to train you, read this book.' 
He would help him psychologically, talk to him about philosophy, and things like that." Moreover, Maradona was notorious for his poor diet and extreme lifestyle off the pitch, including his use of illicit drugs and alcohol abuse, which, along with personal issues, his metabolism, medication that he was prescribed, and periods of inactivity due to injuries and suspensions, led to his significant weight gain and physical decline as his career progressed; his lack of discipline and difficulties in his turbulent personal life are thought by some in the sport to have negatively impacted his performances and longevity in the later years of his playing career. A controversial figure, Maradona earned critical acclaim from players, pundits, and managers for his playing style, but also drew criticism in the media for his temper and confrontational behaviour, both on and off the pitch. However, in 2005, Paolo Maldini described Maradona both as the greatest player he ever faced and as the most honest, stating: "He was a model of good behaviour on the pitch – he was respectful of everyone, from the great players down to the ordinary team member. He was always getting kicked around and he never complained – not like some of today's strikers." Maldini's former club and international teammate, the defender Baresi, stated when asked to name his greatest opponent: "Maradona; when he was on form, there was almost no way of stopping him," while fellow former Italy defender Giuseppe Bergomi described Maradona as the greatest player of all time in 2018. In 1999, Maradona was placed second behind Pelé by "World Soccer" in the magazine's list of the "100 Greatest Players of the 20th Century." Along with Pelé, Maradona was one of the two joint winners of the "FIFA Player of the Century" award in 2000, and also placed fifth in "IFFHS' Century Elections." 
In a 2014 FIFA poll, Maradona was voted the second-greatest number 10 of all time, behind only Pelé, and later that year was ranked second in "The Guardian"'s list of the 100 greatest World Cup players of all time, ahead of the 2014 World Cup in Brazil, once again behind Pelé. In 2017, "FourFourTwo" ranked him in first place in their list of "100 greatest players," while in 2018, he was ranked in first place by the same magazine in their list of the "Greatest Football Players in World Cup History"; in March 2020, he was also ranked first by Jack Gallagher of "90min.com" in the site's list of the "Top 50 Greatest Players of All Time." In May 2020, "Sky Sports" ranked him as the greatest player never to have won the Champions League or European Cup. Hounded for years by the press, Maradona once fired a compressed-air rifle at reporters who he claimed were invading his privacy. In 1990, the Konex Foundation from Argentina granted him the Diamond Konex Award, one of the most prestigious culture awards in Argentina, as the most important personality in sports of the last decade in his country. In 2000, Maradona published his autobiography "Yo Soy El Diego" ("I Am The Diego"), which became a bestseller in Argentina. Two years later, Maradona donated the Cuban royalties of his book to "the Cuban people and Fidel". In 2000, he won the FIFA Player of the Century award, which was to be decided by votes on FIFA's official website, its official magazine and a grand jury. Maradona won the Internet-based poll, garnering 53.6% of the votes against 18.53% for Pelé. In spite of this, and shortly before the ceremony, FIFA added a second award and appointed a "Football Family" committee composed of football journalists, which gave Pelé the title of best player of the century to make it a draw. Maradona also came fifth in the vote of the IFFHS (International Federation of Football History and Statistics). 
In 2001, the Argentine Football Association (AFA) asked FIFA for authorization to retire the jersey number 10 for Maradona. FIFA did not grant the request, even though Argentine officials have maintained that FIFA hinted that it would. Maradona has topped a number of fan polls, including a 2002 FIFA poll in which his second goal against England was chosen as the best goal ever scored in a World Cup; he also won the most votes in a poll to determine the All-Time Ultimate World Cup Team. On 22 March 2010, Maradona was chosen number 1 in The Greatest 10 World Cup players of all time by the London-based newspaper "The Times". Argentinos Juniors named its stadium after Maradona on 26 December 2003. In 2003, Maradona was employed by the Libyan footballer Al-Saadi Gaddafi, the third son of Colonel Muammar Gaddafi, as a "technical consultant" while Al-Saadi was playing for the Italian club Perugia, then in Serie A. On 22 June 2005, it was announced that Maradona would return to former club Boca Juniors as a sports vice president in charge of managing the First Division roster (after a disappointing 2004–05 season, which coincided with Boca's centenary). His contract began on 1 August 2005, and one of his first recommendations proved to be very effective: advising the club to hire Alfio Basile as the new coach. With Maradona fostering a close relationship with the players, Boca won the 2005 Apertura, the 2006 Clausura, the 2005 Copa Sudamericana and the 2005 Recopa Sudamericana. On 15 August 2005, Maradona made his debut as host of a talk-variety show on Argentine television, La Noche del 10 ("The Night of the No. 10"). His main guest on opening night was Pelé; the two had a friendly chat, showing no signs of past differences. However, the show also included a cartoon villain with a clear physical resemblance to Pelé. On subsequent evenings, he led the ratings on all occasions but one. 
Most guests were drawn from the worlds of football and show business, including Ronaldo and Zinedine Zidane, but there were also interviews with other notable friends and personalities such as Cuban leader Fidel Castro and boxers Roberto Durán and Mike Tyson. Maradona gave each of his guests a signed Argentina jersey, which Tyson wore when he arrived in Brazil, Argentina's biggest rival. In November 2005, however, Maradona rejected an offer to work with Argentina's national football team. In May 2006, Maradona agreed to take part in the UK's Soccer Aid (a program to raise money for Unicef). In September 2006, Maradona, wearing his famous blue and white number 10 shirt, captained Argentina in a three-day World Cup of Indoor Football tournament in Spain. On 26 August 2006, it was announced that Maradona was quitting his position at Boca Juniors because of disagreements with the AFA, who had selected Alfio Basile to be the new coach of the Argentina national team. In 2008, award-winning Serbian filmmaker Emir Kusturica made a documentary about Maradona's life, entitled "Maradona". On 1 September 2014, Maradona, along with many current and former footballing stars, took part in the "Match for Peace", which was played at the Stadio Olimpico in Rome, with the proceeds donated entirely to charity. Maradona set up a goal for Roberto Baggio during the first half of the match, with a chipped through-ball over the defence with the outside of his left foot. Unusually, both Baggio and Maradona wore the number 10 shirt, despite playing on the same team. On 17 August 2015, Maradona visited Ali Bin Nasser, the Tunisian referee of the Argentina–England quarter-final at the 1986 World Cup in which Maradona scored his Hand of God goal, and paid tribute to him by giving him a signed Argentine jersey. Maradona began his managerial career alongside former Argentinos Juniors midfield teammate Carlos Fren. 
The pair led Mandiyú of Corrientes in 1994 and Racing Club in 1995, with little success. In May 2011, he became manager of Dubai club Al Wasl FC in the United Arab Emirates. Maradona was sacked on 10 July 2012. In August 2013, Maradona moved on to become mental coach at Argentine club Deportivo Riestra. Maradona departed this role in 2017 to become the head coach of Fujairah, in the UAE second division, before leaving at the end of the season after failing to secure promotion with the club. In September 2018, he was appointed manager of Mexican second division side Dorados. He made his debut with Dorados on 17 September 2018 with a 4–1 victory over Cafetaleros de Tapachula. On 13 June 2019, after Dorados failed to clinch promotion to the Mexican top flight, Maradona's lawyer announced that he would be stepping down from the role, citing health reasons. On 5 September 2019, Maradona was unveiled as the new head coach of Gimnasia de La Plata, signing a contract until the end of the season. After two months in charge he left the club on 19 November. However, two days later, Maradona rejoined the club as manager, saying that "we finally achieved political unity in the club". Maradona insisted that Gabriel Pellegrino remain club president if he were to stay with Gimnasia de La Plata. However, it was still not clear whether Pellegrino, who had declined to run for re-election, would stay on as club president. Originally scheduled for 23 November 2019, the election was delayed 15 days. On 15 December 2019, Pellegrino, who was encouraged by Maradona to seek re-election, was re-elected to a three-year term. Despite a poor record in the 2019–20 season, Gimnasia renewed Maradona's contract on 3 June 2020 through the 2020–21 season. After the resignation of Argentina national team coach Alfio Basile in 2008, Maradona immediately proposed his candidacy for the vacant role.
According to several press sources, his major challengers included Diego Simeone, Carlos Bianchi, Miguel Ángel Russo and Sergio Batista. On 29 October 2008, AFA chairman Julio Grondona confirmed that Maradona would be the head coach of the national team. On 19 November 2008, Maradona managed Argentina for the first time when they played against Scotland at Hampden Park in Glasgow, which Argentina won 1–0. After winning his first three matches in charge of the national team, he oversaw a 6–1 defeat to Bolivia, equalling the team's worst ever margin of defeat. With two matches remaining in the qualification tournament for the 2010 World Cup, Argentina was in fifth place and faced the possibility of failing to qualify, but victory in the last two matches secured qualification for the finals. After Argentina's qualification, Maradona used abusive language at the live post-game press conference, telling members of the media to "suck it and keep on sucking it". FIFA responded with a two-month ban on all footballing activity, which expired on 15 January 2010, and a CHF 25,000 fine, with a warning as to his future conduct. The friendly match scheduled to take place at home to the Czech Republic on 15 December, during the period of the ban, was cancelled. The only match Argentina played during Maradona's ban was a friendly away to Catalonia, which they lost 4–2. At the World Cup finals in June 2010, Argentina started by winning 1–0 against Nigeria, followed by a 4–1 victory over South Korea on the strength of a Gonzalo Higuaín hat-trick. In the final match of the group stage, Argentina won 2–0 against Greece to win the group and advance to a second round, meeting Mexico. After defeating Mexico 3–1, however, Argentina was routed by Germany 4–0 in the quarter-finals to go out of the competition. Argentina was ranked fifth in the tournament. After the defeat to Germany, Maradona admitted that he was considering his future as Argentina coach, stating, "I may leave tomorrow." 
On 15 July 2010, the AFA said that he would be offered a new four-year deal that would keep him in charge through to the summer of 2014, when Brazil was to stage the World Cup. On 27 July, however, the AFA announced that its board had unanimously decided not to renew his contract, unlike that of 1978 World Cup-winning captain and 1986 teammate Daniel Passarella. Afterwards, on 29 July, Maradona claimed that AFA president Julio Grondona and director of national teams (as well as his former Argentine national team and Sevilla coach) Carlos Bilardo had "lied to", "betrayed" and effectively sacked him from the role. He said, "They wanted me to continue, but seven of my staff should not go on; if he told me that, it meant he did not want me to keep working." Born into a Roman Catholic family, Maradona was the son of Diego Maradona Senior and Dalma Salvadora Franco. Maradona married long-time fiancée Claudia Villafañe on 7 November 1984 in Buenos Aires, and they had two daughters, Dalma Nerea (born 2 April 1987) and Gianinna Dinorah (born 16 May 1989), through whom he became a grandfather in 2009. Maradona and Villafañe divorced in 2004. Daughter Dalma has since asserted that the divorce was the best solution for all, as her parents remained on friendly terms. They travelled together to Naples for a series of homages in June 2005 and were seen together on other occasions, including the Argentina games during the 2006 World Cup. During the divorce proceedings, Maradona admitted he was the father of Diego Sinagra (born in Naples on 20 September 1986). Italian courts had already ruled as much in 1993, after Maradona refused to undergo DNA tests to prove or disprove his paternity. Diego Junior met Maradona for the first time in May 2003 after tricking his way onto a golf course in Italy where Maradona was playing. Sinagra is now a footballer playing in Italy.
After the divorce, Claudia embarked on a career as a theatre producer, and Dalma sought an acting career; she had expressed her desire to attend the Actor's Studio in Los Angeles. Maradona's relationship with his immediate family was a close one, and in a 1990 interview with "Sports Illustrated" he showed phone bills indicating that he spent a minimum of US$15,000 per month on calls to his parents and siblings. Maradona's mother, Dalma, died on 19 November 2011. He was in Dubai at the time and desperately tried to fly back in time to see her, but was too late. She was 81 years old. His father, "Don" Diego, died on 25 June 2015 at age 87. Maradona's great-nephew, Hernán, is a professional footballer. From the mid-1980s until 2004, Maradona was addicted to cocaine. He allegedly began using the drug in Barcelona in 1983. By the time he was playing for Napoli, he had a regular addiction, which began to interfere with his ability to play football. Maradona had a tendency to put on weight and suffered increasingly from obesity. He was obese from the end of his playing career until undergoing gastric bypass surgery in a clinic in Cartagena de Indias, Colombia, on 6 March 2005. His surgeon said that Maradona would follow a liquid diet for three months in order to return to his normal weight. When Maradona resumed public appearances shortly thereafter, he displayed a notably thinner figure. On 29 March 2007, Maradona was readmitted to a hospital in Buenos Aires. He was treated for hepatitis and the effects of alcohol abuse and was released on 11 April, but was readmitted two days later. In the following days, there were constant rumors about his health, including three false claims of his death within a month. After a transfer to a psychiatric clinic specialising in alcohol-related problems, he was discharged on 7 May. On 8 May 2007, Maradona appeared on Argentine television and stated that he had quit drinking and had not used drugs in two-and-a-half years.
In January 2019, Maradona underwent surgery after a hernia caused internal bleeding in his stomach. Having previously been vocal in his support of neoliberal Argentine President Carlos Menem and his Harvard University-educated economist Domingo Cavallo, Maradona later showed sympathy for left-wing ideologies. He became friends with Cuban leader Fidel Castro while receiving treatment on the island, with Castro stating, "Diego is a great friend and very noble, too. There's also no question he's a wonderful athlete and has maintained a friendship with Cuba to no material gain of his own." He has a portrait of Castro tattooed on his left leg and one of Castro's second-in-command, fellow Argentine Che Guevara, on his right arm. He dedicated his autobiography, "El Diego", to various people, including Castro, writing, "To Fidel Castro and, through him, all the Cuban people." Maradona was also a supporter of former Venezuelan President Hugo Chávez. In 2005, he came to Venezuela to meet Chávez, who received him in the Miraflores Palace. After this meeting, Maradona claimed that he had come with the aim of meeting a "great man" ("un grande" in Spanish), but had instead met a gigantic man ("un gigante" in Spanish, meaning he was more than great). "I believe in Chávez, I am Chavista. Everything Fidel does, everything Chávez does, for me is the best." Maradona was the guest of honor of Chávez at the opening game of the 2007 Copa América held in Venezuela. Maradona has declared his opposition to what he identifies as imperialism, notably during the 2005 Summit of the Americas in Mar del Plata, Argentina. There he protested George W. Bush's presence in Argentina, wearing a T-shirt reading "STOP BUSH" (with the "s" in "Bush" replaced with a swastika) and referring to Bush as "human garbage". In August 2007, Maradona went further, making an appearance on Chávez's weekly television show "Alo Presidente" and saying, "I hate everything that comes from the United States.
I hate it with all my strength." In December 2008, however, Maradona had adopted a more pro-US attitude when he expressed admiration for Bush's successor, President-elect Barack Obama, and held great expectations for him. With his poor shanty town upbringing, Maradona has cultivated a man of the people persona. During a meeting with Pope John Paul II at the Vatican in 1987, they clashed on the issue of wealth disparity, with Maradona stating, "I argued with him because I was in the Vatican and I saw all these golden ceilings and afterwards I heard the Pope say the Church was worried about the welfare of poor kids. Sell your ceiling then amigo, do something!" In September 2014, Maradona met with Pope Francis in Rome, crediting Francis for inspiring him to return to religion after many years; he stated, "We should all imitate Pope Francis. If each one of us gives something to someone else, no one in the world would be starving." In December 2007, Maradona presented a signed shirt with a message of support to the people of Iran: it is displayed in the Iranian Ministry of Foreign Affairs' museum. In April 2013, Maradona visited the tomb of Hugo Chávez and urged Venezuelans to elect the late leader's designated successor, Nicolás Maduro, to continue the socialist leader's legacy; "Continue the struggle," Maradona said on television. Maradona attended Maduro's final campaign rally in Caracas, signing footballs and kicking them to the crowd, and presented Maduro with an Argentina jersey. Having visited Chávez's tomb with Maradona, Maduro said, "Speaking with Diego was very emotional because comandante Chávez also loved him very much." Maradona participated and danced at the electoral campaign rally during the 2018 presidential elections in Venezuela. During the 2019 Venezuelan presidential crisis, the Mexican Football Federation fined him for violating their code of ethics and dedicating a team victory to Nicolás Maduro. 
In October 2015, Maradona thanked Queen Elizabeth II and the Houses of Parliament in London for giving him the chance to provide "true justice" as head of an organisation designed to help young children. In a video released on his official Facebook page, Maradona confirmed he would accept their nomination for him to become Latin American director for the non-governmental organisation Football for Unity. Maradona habitually refers to himself in the third person as "Maradona" and "El Diego". In March 2009, Italian officials announced that Maradona still owed the Italian government €37 million in local taxes, of which €23.5 million was accrued interest on his original debt. They reported that thus far, Maradona had paid only €42,000, two luxury watches and a set of earrings. In Argentina, Maradona is considered a sports hero. On the idolatry that exists in Argentina, former teammate Jorge Valdano said, "When Maradona retired from active football, he left Argentina traumatized. Maradona was more than just a great footballer. He was a special compensating factor for a country that, in the space of a few years, lived through several military dictatorships and social frustrations of all kinds". Valdano added, "Maradona offered the Argentines a way out of their collective frustration, and that's why people love him. He is a divine figure." Ever since 1986, it has been common for Argentines abroad to hear Maradona's name as a token of recognition, even in remote places. The Tartan Army sing a version of the Hokey Cokey in honour of the Hand of God goal against England. In Argentina, Maradona is often talked about in terms reserved for legends. In the Argentine film "El Hijo de la Novia" ("Son of the Bride"), somebody who impersonates a Catholic priest says to a bar patron, "They idolized him and then crucified him."
When a friend scolds him for taking the prank too far, the fake priest retorts, "But I was talking about Maradona." He is the subject of the film "El Camino de San Diego", though he himself only appears in archive footage. Maradona made many cameo appearances in the Argentine comic book "El Cazador de Aventuras". After it closed, the authors started a new short-lived comic book titled "El Die", with Maradona as the main character. Several online Flash games have been dedicated entirely to Maradona's legacy. In Rosario, Argentina, locals organized the parody religion of the "Church of Maradona". The organization reformulates many elements from Christian tradition, such as Christmas or prayers, recasting them with details from Maradona's life. It had 200 founding members, and tens of thousands more have become members via the church's official web site. Many Argentine artists performed songs in tribute to Diego, such as "La Mano de Dios" by El Potro Rodrigo, "Maradona" by Andrés Calamaro, "Para siempre Diego" (Diego forever) by Los Ratones Paranoicos, "Francotirador" (Sniper) by Attaque 77, "Maradona blues" by Charly García, "Santa Maradona" (Saint Maradona) by Mano Negra, and "La Vida Tombola" by Manu Chao, among others. There are also films, such as "Maradona, La Mano de Dios" (Maradona, the Hand of God), "El Camino de San Diego" (Saint Diego's Road), "Amando a Maradona" (Loving Maradona), and "Maradona by Kusturica". By 1982, Maradona had become one of the biggest sports stars in the world and had endorsements with many companies, including Puma and Coca-Cola, earning him an additional $1.5 million per year on top of his club salary. In 1982, he featured in a World Cup commercial for Coca-Cola, and a Japanese commercial for Puma. In 2010 he appeared in a commercial for French fashion house Louis Vuitton, indulging in a game of table football with fellow World Cup winners Pelé and Zinedine Zidane.
Maradona features in the music video to the 2010 World Cup song "Waka Waka" by Shakira, with footage shown of him celebrating Argentina winning the 1986 World Cup. A 2006 television commercial for Brazilian soft drink Guaraná Antarctica portrayed Maradona as a member of the Brazil national team, including wearing the yellow jersey and singing the Brazilian national anthem with Brazilian players Ronaldo and Kaká. Later on in the commercial he wakes up realizing it was a nightmare after having drunk too much of the drink. This generated some controversy in the Argentine media after its release (although the commercial was not supposed to air on the Argentine market, fans could see it online). Maradona replied that he has no problem in wearing the Brazilian national squad jersey despite Argentina and Brazil having a tense rivalry in football, but that he would refuse to wear the shirt of River Plate, Boca Juniors' traditional rival. There is a documented phenomenon of Brazilians being named in honour of Maradona, an example being footballer Diego Costa. In 2017, Maradona featured as a legendary player in the football video games "FIFA 18" and "Pro Evolution Soccer 2018". In 2019, a documentary film titled "Diego Maradona" was released by Academy Award and BAFTA Award winning filmmaker Asif Kapadia, director of "Amy" (on singer Amy Winehouse) and "Senna" (on motor racing driver Ayrton Senna). Kapadia states, "Maradona is the third part of a trilogy about child geniuses and fame." He added, "I was fascinated by his journey, wherever he went there were moments of incredible brilliance and drama. He was a leader, taking his teams to the very top, but also many lows in his career. He was always the little guy fighting against the system... and he was willing to do anything, to use all of his cunning and intelligence to win." Boca Juniors Barcelona Napoli Argentina Youth Argentina
https://en.wikipedia.org/wiki?curid=8485
Chloramphenicol
Chloramphenicol is an antibiotic useful for the treatment of a number of bacterial infections. This includes use as an eye ointment to treat conjunctivitis. By mouth or by injection into a vein, it is used to treat meningitis, plague, cholera, and typhoid fever. Its use by mouth or by injection is only recommended when safer antibiotics cannot be used. Monitoring both blood levels of the medication and blood cell levels every two days is recommended during treatment. Common side effects include bone marrow suppression, nausea, and diarrhea. The bone marrow suppression may result in death. To reduce the risk of side effects, treatment duration should be as short as possible. People with liver or kidney problems may need lower doses. In young children, a condition known as gray baby syndrome may occur, which results in a swollen stomach and low blood pressure. Its use near the end of pregnancy and during breastfeeding is typically not recommended. Chloramphenicol is a broad-spectrum antibiotic that typically stops bacterial growth by stopping the production of proteins. Chloramphenicol was discovered after being isolated from "Streptomyces venezuelae" in 1947. Its chemical structure was identified and it was first artificially made in 1949, making it the first antibiotic to be synthesized rather than extracted from a micro-organism. It is on the World Health Organization's List of Essential Medicines, the safest and most effective medicines needed in a health system. It is available as a generic medication. The wholesale cost in the developing world of an intravenous dose is about US$0.40–1.90. In the United States an intravenous dose costs about $41.47. The original indication of chloramphenicol was in the treatment of typhoid, but the now almost universal presence of multiple drug-resistant "Salmonella typhi" has meant it is seldom used for this indication except when the organism is known to be sensitive.
In low-income countries, the WHO no longer recommends oily chloramphenicol as first-line treatment for meningitis, but recognises it may be used with caution if there are no available alternatives. In the context of preventing endophthalmitis, a complication of cataract surgery, a 2017 systematic review found moderate evidence that using chloramphenicol eye drops in addition to an antibiotic injection (cefuroxime or penicillin) will likely lower the risk of endophthalmitis, compared to eye drops or antibiotic injections alone. Chloramphenicol has a broad spectrum of activity and has been effective in treating ocular infections such as conjunctivitis and blepharitis caused by a number of bacteria including "Staphylococcus aureus", "Streptococcus pneumoniae", and "Escherichia coli". It is not effective against "Pseudomonas aeruginosa". Minimum inhibitory concentrations for medically significant organisms depend on the bacterial strain being targeted; some strains of "E. coli", for example, show spontaneous emergence of chloramphenicol resistance. Three mechanisms of resistance to chloramphenicol are known: reduced membrane permeability, mutation of the 50S ribosomal subunit, and elaboration of chloramphenicol acetyltransferase. It is easy to select for reduced membrane permeability to chloramphenicol "in vitro" by serial passage of bacteria, and this is the most common mechanism of low-level chloramphenicol resistance. High-level resistance is conferred by the "cat" gene; this gene codes for an enzyme called chloramphenicol acetyltransferase, which inactivates chloramphenicol by covalently linking one or two acetyl groups, derived from acetyl-"S"-coenzyme A, to the hydroxyl groups on the chloramphenicol molecule. The acetylation prevents chloramphenicol from binding to the ribosome. Resistance-conferring mutations of the 50S ribosomal subunit are rare.
Chloramphenicol resistance may be carried on a plasmid that also codes for resistance to other drugs. One example is the ACCoT plasmid (A=ampicillin, C=chloramphenicol, Co=co-trimoxazole, T=tetracycline), which mediates multiple drug resistance in typhoid (also called R factors). As of 2014, some "Enterococcus faecium" and "Pseudomonas aeruginosa" strains are resistant to chloramphenicol. Some "Veillonella" spp. and "Staphylococcus capitis" strains have also developed resistance to chloramphenicol to varying degrees. The most serious side effect of chloramphenicol treatment is aplastic anaemia. This effect is rare and sometimes fatal. The risk of aplastic anaemia is high enough that alternatives should be strongly considered. Treatments are available but expensive. No way exists to predict who may or may not get this side effect. The effect usually occurs weeks or months after treatment has been stopped, and a genetic predisposition may be involved. It is not known whether monitoring the blood counts of patients can prevent the development of aplastic anaemia, but patients are recommended to have a baseline blood count with a repeat blood count every few days while on treatment. Chloramphenicol should be discontinued if the complete blood count drops. The highest risk is with oral chloramphenicol (affecting 1 in 24,000–40,000) and the lowest risk occurs with eye drops (affecting less than one in 224,716 prescriptions). Thiamphenicol, a related compound with a similar spectrum of activity, is available in Italy and China for human use, and has never been associated with aplastic anaemia. Thiamphenicol is available in the U.S. and Europe as a veterinary antibiotic, but is not approved for use in humans. Chloramphenicol may cause bone marrow suppression during treatment; this is a direct toxic effect of the drug on human mitochondria. This effect manifests first as a fall in hemoglobin levels, which occurs quite predictably once a cumulative dose of 20 g has been given.
The anaemia is fully reversible once the drug is stopped and does not predict future development of aplastic anaemia. Studies in mice have suggested existing marrow damage may compound any marrow damage resulting from the toxic effects of chloramphenicol. Leukemia, a cancer of the blood or bone marrow, is characterized by an abnormal increase of immature white blood cells. The risk of childhood leukemia is increased, as demonstrated in a Chinese case–control study, and the risk increases with length of treatment. Intravenous chloramphenicol use has been associated with the so-called gray baby syndrome. This phenomenon occurs in newborn infants because they do not yet have fully functional liver enzymes (i.e. UDP-glucuronyl transferase), so chloramphenicol remains unmetabolized in the body. This causes several adverse effects, including hypotension and cyanosis. The condition can be prevented by using the drug at the recommended doses, and monitoring blood levels. Fever, macular and vesicular rashes, angioedema, urticaria, and anaphylaxis may occur. Herxheimer's reactions have occurred during therapy for typhoid fever. Headache, mild depression, mental confusion, and delirium have been described in patients receiving chloramphenicol. Optic and peripheral neuritis have been reported, usually following long-term therapy. If this occurs, the drug should be promptly withdrawn. Chloramphenicol is extremely lipid-soluble; it remains relatively unbound to protein and is a small molecule. It has a large apparent volume of distribution and penetrates effectively into all tissues of the body, including the brain. Distribution is not uniform, with highest concentrations found in the liver and kidney, with lowest in the brain and cerebrospinal fluid. The concentration achieved in brain and cerebrospinal fluid is around 30 to 50% of the overall average body concentration, even when the meninges are not inflamed; this increases to as high as 89% when the meninges are inflamed. 
Chloramphenicol increases the absorption of iron. Chloramphenicol is metabolized by the liver to chloramphenicol glucuronate (which is inactive). In liver impairment, the dose of chloramphenicol must therefore be reduced. No standard dose reduction exists for chloramphenicol in liver impairment, and the dose should be adjusted according to measured plasma concentrations. The majority of the chloramphenicol dose is excreted by the kidneys as the inactive metabolite, chloramphenicol glucuronate. Only a tiny fraction of the chloramphenicol is excreted by the kidneys unchanged. Plasma levels should be monitored in patients with renal impairment, but this is not mandatory. Chloramphenicol succinate ester (an intravenous prodrug form) is readily excreted unchanged by the kidneys, more so than chloramphenicol base, and this is the major reason why levels of chloramphenicol in the blood are much lower when given intravenously than orally. Chloramphenicol passes into breast milk and should therefore be avoided during breast feeding, if possible. Plasma levels of chloramphenicol must be monitored in neonates and patients with abnormal liver function. Plasma levels should be monitored in all children under the age of four, the elderly, and patients with kidney failure. Because the efficacy and toxicity of chloramphenicol are associated with a maximum serum concentration, peak levels (measured one hour after an intravenous dose is given) should be 10–20 µg/ml; trough levels (taken immediately before a dose) should be 5–10 µg/ml. Administration of chloramphenicol concomitantly with bone marrow depressant drugs is contraindicated, although concerns over aplastic anaemia associated with ocular chloramphenicol have largely been discounted. Chloramphenicol is a potent inhibitor of the cytochrome P450 isoforms CYP2C19 and CYP3A4 in the liver.
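The peak and trough monitoring ranges quoted above can be expressed as a simple range check. The following Python sketch is purely illustrative — the function name, constants, and messages are invented for this example, and it is in no sense clinical software:

```python
PEAK_RANGE = (10, 20)    # ug/ml, measured one hour after an IV dose
TROUGH_RANGE = (5, 10)   # ug/ml, taken immediately before a dose

def check_chloramphenicol_levels(peak, trough):
    """Compare measured serum levels (ug/ml) against the target ranges
    above and return a list of human-readable flags (illustrative only)."""
    flags = []
    if not PEAK_RANGE[0] <= peak <= PEAK_RANGE[1]:
        flags.append(f"peak {peak} ug/ml outside 10-20 ug/ml target")
    if not TROUGH_RANGE[0] <= trough <= TROUGH_RANGE[1]:
        flags.append(f"trough {trough} ug/ml outside 5-10 ug/ml target")
    return flags or ["levels within target ranges"]

print(check_chloramphenicol_levels(15, 7))   # in range
print(check_chloramphenicol_levels(25, 3))   # both out of range
```

The point of the sketch is only that both a ceiling (toxicity) and a floor (efficacy) are monitored, which is why both peak and trough samples are drawn.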
Inhibition of CYP2C19 causes decreased metabolism and therefore increased levels of, for example, antidepressants, antiepileptics, proton-pump inhibitors, and anticoagulants if they are given concomitantly. Inhibition of CYP3A4 causes increased levels of, for example, calcium channel blockers, immunosuppressants, chemotherapeutic drugs, benzodiazepines, azole antifungals, tricyclic antidepressants, macrolide antibiotics, SSRIs, statins, cardiac antiarrhythmics, antivirals, anticoagulants, and PDE5 inhibitors. Chloramphenicol is antagonistic with most cephalosporins, and using both together should be avoided in the treatment of infections. Chloramphenicol is bacteriostatic: it inhibits protein synthesis. It prevents protein chain elongation by inhibiting the peptidyl transferase activity of the bacterial ribosome. It specifically binds to the A2451 and A2452 residues in the 23S rRNA of the 50S ribosomal subunit, preventing peptide bond formation. Chloramphenicol directly interferes with substrate binding in the ribosome, whereas macrolides sterically block the progression of the growing peptide. Chloramphenicol was first isolated from "Streptomyces venezuelae" in 1947, and in 1949 a team of scientists at Parke-Davis including Mildred Rebstock published their identification of its chemical structure and their synthesis, making it the first antibiotic to be synthesized rather than extracted from a micro-organism. In 2007, the accumulation of reports associating aplastic anemia and blood dyscrasia with chloramphenicol eye drops led to its classification as a "probable human carcinogen" according to World Health Organization criteria, based on the known published case reports and the spontaneous reports submitted to the National Registry of Drug-Induced Ocular Side Effects. In many areas of the world an intravenous dose is about US$0.40–1.90. In the United States it costs about $3.60 per dose in oral tablet form at wholesale.
Chloramphenicol is available as a generic worldwide under many brand names and also under various generic names in eastern Europe and Russia, including chlornitromycin, levomycetin, and chloromycetin; the racemate is known as synthomycetin. Chloramphenicol is available as a capsule or as a liquid. In some countries, it is sold as chloramphenicol palmitate ester (CPE). CPE is inactive, and is hydrolysed to active chloramphenicol in the small intestine. No difference in bioavailability is noted between chloramphenicol and CPE. Manufacture of oral chloramphenicol in the U.S. stopped in 1991, because the vast majority of chloramphenicol-associated cases of aplastic anaemia are associated with the oral preparation. No oral formulation of chloramphenicol is now available in the U.S. In molecular biology, chloramphenicol is prepared in ethanol. The intravenous (IV) preparation of chloramphenicol is the succinate ester. This creates a problem: chloramphenicol succinate ester is an inactive prodrug and must first be hydrolysed to chloramphenicol; however, the hydrolysis process is often incomplete, and 30% of the dose is lost and removed in the urine. Serum concentrations of IV chloramphenicol are only 70% of those achieved when chloramphenicol is given orally. For this reason, the dose needs to be increased to 75 mg/kg/day when administered IV to achieve levels equivalent to the oral dose. Oily chloramphenicol (or chloramphenicol oil suspension) is a long-acting preparation of chloramphenicol first introduced by Roussel in 1954; marketed as Tifomycine, it was originally used as a treatment for typhoid. Roussel stopped production of oily chloramphenicol in 1995; the International Dispensary Association has manufactured it since 1998, first in Malta and then in India from December 2004. Oily chloramphenicol was first used to treat meningitis in 1975 and numerous studies since have demonstrated its efficacy.
It is the cheapest treatment available for meningitis (US$5 per treatment course, compared to US$30 for ampicillin and US$15 for five days of ceftriaxone). It has the great advantage of requiring only a single injection, whereas ceftriaxone is traditionally given daily for five days. This recommendation may yet change, now that a single dose of ceftriaxone (cost US$3) has been shown to be equivalent to one dose of oily chloramphenicol. Chloramphenicol is still used occasionally in topical preparations (ointments and eye drops) for the treatment of bacterial conjunctivitis. Isolated case reports of aplastic anaemia following use of chloramphenicol eyedrops exist, but the risk is estimated to be of the order of less than one in 224,716 prescriptions. In Mexico, this is the treatment used prophylactically in newborns. Although its use in veterinary medicine is highly restricted, chloramphenicol still has some important veterinary uses. It is currently considered the most useful treatment of chlamydial disease in koalas. The pharmacokinetics of chloramphenicol have been investigated in koalas. Although unpublished, recent research suggests chloramphenicol could also be applied to frogs to prevent their widespread destruction from fungal infections. It has recently been discovered to be a life-saving cure for chytridiomycosis in amphibians. Chytridiomycosis is a fungal disease, blamed for the extinction of one-third of the 120 frog species lost since 1980.
https://en.wikipedia.org/wiki?curid=6346
Cut-up technique The cut-up technique (or "découpé" in French) is an aleatory literary technique in which a written text is cut up and rearranged to create a new text. The concept can be traced to at least the Dadaists of the 1920s, but was popularized in the late 1950s and early 1960s by writer William S. Burroughs, and has since been used in a wide variety of contexts. The cut-up and the closely associated fold-in are the two main techniques. William Burroughs cited T. S. Eliot's 1922 poem, "The Waste Land", and John Dos Passos' "U.S.A." trilogy, which incorporated newspaper clippings, as early examples of the cut-ups he popularized. Gil J. Wolman developed cut-up techniques as part of his lettrist practice in the early 1950s. Also in the 1950s, painter and writer Brion Gysin more fully developed the cut-up method after accidentally re-discovering it. He had placed layers of newspapers as a mat to protect a tabletop from being scratched while he cut papers with a razor blade. Upon cutting through the newspapers, Gysin noticed that the sliced layers offered interesting juxtapositions of text and image. He began deliberately cutting newspaper articles into sections, which he randomly rearranged. The book "Minutes to Go" resulted from his initial cut-up experiment: unedited and unchanged cut-ups which emerged as coherent and meaningful prose. South African poet Sinclair Beiles also used this technique and co-authored "Minutes To Go". Gysin introduced Burroughs to the technique at the Beat Hotel. The pair later applied the technique to printed media and audio recordings in an effort to decode the material's implicit content, hypothesizing that such a technique could be used to discover the true meaning of a given text. Burroughs also suggested cut-ups may be effective as a form of divination, saying, "When you cut into the present the future leaks out." Burroughs also further developed the "fold-in" technique.
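The mechanical procedure Gysin describes (cut a text into sections, then rearrange them at random) translates directly into code. The sketch below is a minimal illustration rather than any canonical implementation; the function name and the word-chunking scheme are invented for this example:

```python
import random

def cut_up(text, pieces=4, seed=None):
    """Slice a text into contiguous word-chunks and reassemble them at random."""
    rng = random.Random(seed)
    words = text.split()
    size = max(1, len(words) // pieces)
    chunks = [words[i:i + size] for i in range(0, len(words), size)]
    rng.shuffle(chunks)
    return " ".join(word for chunk in chunks for word in chunk)

print(cut_up("when you cut into the present the future leaks out", seed=7))
```

Every word of the source survives; only the ordering changes, which is why cut-ups can still emerge as coherent, meaningful prose.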
In 1977, Burroughs and Gysin published "The Third Mind", a collection of cut-up writings and essays on the form. Jeff Nuttall's publication "My Own Mag" was another important outlet for the then-radical technique. In an interview, Alan Burns noted that for "Europe After The Rain" (1965) and subsequent novels he used a version of cut-ups: "I did not actually use scissors, but I folded pages, read across columns, and so on, discovering for myself many of the techniques Burroughs and Gysin describe". Argentine writer Julio Cortázar often used cut-ups in his 1963 novel "Hopscotch". In 1969, poets Howard W. Bergerson and J. A. Lindon developed a cut-up technique known as vocabularyclept poetry, in which a poem is formed by taking all the words of an existing poem and rearranging them, often preserving the metre and stanza lengths. A precedent of the technique occurred during a Dadaist rally in the 1920s in which Tristan Tzara offered to create a poem on the spot by pulling words at random from a hat. Collage, which was popularized roughly contemporaneously with the Surrealist movement, sometimes incorporated texts such as newspapers or brochures. Prior to the rally, the technique had been published in an issue of 391 in the poem by Tzara, "dada manifesto on feeble love and bitter love" under the sub-title, "TO MAKE A DADAIST POEM". A drama scripted for five voices by performance poet Hedwig Gorski in 1977 originated the idea of creating poetry only for performance instead of for print publication. The "neo-verse drama" titled "Booby, Mama!" written for "guerilla theater" performances in public places used a combination of newspaper cut-ups that were edited and choreographed for a troupe of non-professional street actors. Kathy Acker, a literary and intermedia artist, sampled external sources and reconfigured them into the creation of shifting versions of her own constructed identity.
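Vocabularyclept poetry, as described above, reuses every word of a source poem in a new arrangement. A rough sketch follows, assuming that preserving each line's word count is an acceptable stand-in for preserving stanza shape (the real form aims at metrical fidelity, which this does not check); the function name is invented for this example:

```python
import random

def vocabularyclept(poem, seed=None):
    """Shuffle all the words of a poem, preserving each line's word count."""
    rng = random.Random(seed)
    lines = [line.split() for line in poem.splitlines()]
    words = [w for line in lines for w in line]
    rng.shuffle(words)
    rebuilt, i = [], 0
    for line in lines:
        rebuilt.append(" ".join(words[i:i + len(line)]))
        i += len(line)
    return "\n".join(rebuilt)
```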
In her late 1970s novel "Blood and Guts in High School", Acker explored literary cut-up and appropriation as an integral part of her method. Antony Balch and Burroughs created a collaborative film, "The Cut-Ups", which opened in London in 1967. This was part of an abandoned project called "Guerrilla Conditions", intended as a documentary on Burroughs and filmed throughout 1961–1965. Inspired by Burroughs' and Gysin's technique of cutting up text and rearranging it in random order, Balch had an editor cut his footage for the documentary into little pieces and impose no control over its reassembly. The film opened at Oxford Street's Cinephone cinema and provoked a disturbed reaction. Many audience members claimed the film made them ill, others demanded their money back, while some just stumbled out of the cinema ranting "it's disgusting". Other cut-up films include "Ghost at n°9 (Paris)" (1963–72), a posthumously released short film compiled from reels found at Balch's office after his death, and "William Buys a Parrott" (1982), "Bill and Tony" (1972), "Towers Open Fire" (1963) and "The Junky's Christmas" (1966). From the early 1970s, David Bowie used cut-ups to create some of his lyrics. Thom Yorke applied a similar method in Radiohead's "Kid A" (2000) album, writing single lines, putting them into a hat, and drawing them out at random while the band rehearsed the songs. Perhaps indicative of Thom Yorke's influences, instructions for "How to make a Dada poem" appeared on Radiohead's website at this time. Stephen Mallinder of Cabaret Voltaire reported to "Inpress" magazine's Andrez Bergen that "I do think the manipulation of sound in our early days – the physical act of cutting up tapes, creating tape loops and all that – has a strong reference to Burroughs and Gysin." Another industrial music pioneer, Al Jourgensen of Ministry, named Burroughs and his cut-up technique as the most important influence on how he approached the use of samples.
A modern example of the cut-up technique used in music is the EP "Fetus" by Patrick Roach.
https://en.wikipedia.org/wiki?curid=6347
Congenital iodine deficiency syndrome Congenital iodine deficiency syndrome is a medical condition present at birth marked by impaired physical and mental development, due to insufficient thyroid hormone (hypothyroidism) often caused by insufficient dietary iodine during pregnancy. It is one cause of underactive thyroid function at birth, called congenital hypothyroidism, and also referred to as "cretinism". If untreated, it results in impairment of both physical and mental development. Symptoms may include goiter, poor length growth in infants, reduced adult stature, thickened skin, hair loss, enlarged tongue, a protruding abdomen; delayed bone maturation and puberty in children; and mental deterioration, neurological impairment, impeded ovulation, and infertility in adults. In developed countries, thyroid function testing of newborns has assured that in those affected, treatment with the thyroid hormone thyroxine is begun promptly. This screening and treatment has virtually eliminated the consequences of the disease. Iodine deficiency causes gradual enlargement of the thyroid gland, referred to as a goiter. Poor length growth is apparent as early as the first year of life. Adult stature without treatment varies with severity, sex, and other genetic factors. Other signs include thickened skin, hair loss, enlarged tongue, and a protruding abdomen. In children, bone maturation and puberty are severely delayed. In adults, ovulation is impeded and infertility is common. Mental deterioration is common. Neurological impairment may be mild, with reduced muscle tone and coordination, or so severe that the person cannot stand or walk. Cognitive impairment may also range from mild to so severe that the person is nonverbal and dependent on others for basic care. Thought and reflexes are slower. Around the world, the most common cause of congenital hypothyroidism is dietary iodine deficiency.
It has affected many people worldwide and continues to be a major public health problem in many countries. Iodine is an essential trace element, necessary for the synthesis of thyroid hormones. Iodine deficiency is the most common preventable cause of neonatal and childhood brain damage worldwide. Although iodine is found in many foods, it is not universally present in all soils in adequate amounts. Most iodine, in iodide form, is in the oceans, where the iodide ions oxidize to elemental iodine, which then enters the atmosphere and falls to earth in rain, introducing iodine to soils. Soil deficient in iodine is most common inland, in mountainous areas, and in areas of frequent flooding. It can also occur in coastal regions, where iodine might have been removed from the soil by glaciation, as well as leaching by snow, water and heavy rainfall. Plants and animals grown in iodine deficient soils are correspondingly deficient. Populations living in those areas without outside food sources are most at risk of iodine deficiency diseases. Dwarfism may also be caused by malnutrition or other hormonal deficiencies, such as insufficient growth hormone secretion, hypopituitarism, decreased secretion of growth hormone-releasing hormone, deficient growth hormone receptor activity and downstream causes, such as insulin-like growth factor 1 (IGF-1) deficiency. There are public health campaigns in many countries which involve iodine administration. As of December 2019, 122 countries have mandatory iodine food fortification programs. Congenital iodine deficiency has been almost completely eliminated in developed countries through iodine supplementation of food and by newborn screening utilizing a blood test for thyroid function. Treatment consists of lifelong administration of thyroxine (T4). Thyroxine must be dosed as tablets only, even to newborns, as the liquid oral suspensions and compounded forms cannot be depended on for reliable dosing. 
For infants, the T4 tablets are generally crushed and mixed with breast milk, formula milk or water. If the medication is mixed with formulas containing iron or soya products, larger doses may be required, as these substances may alter the absorption of thyroid hormone from the gut. Monitoring TSH blood levels every 2–3 weeks during the first months of life is recommended to ensure that affected infants are at the high end of normal range. A goiter is the most specific clinical marker of either the direct or indirect insufficient intake of iodine in the human body. There is evidence of goiter, and its medical treatment with iodine-rich algae and burnt sponges, in ancient Chinese, Egyptian, and Roman medical texts. In 1848, King Carlo Alberto of Sardinia commissioned the first epidemiological study of congenital iodine deficiency syndrome, in northern Savoy, where it was frequent. In past centuries, the well-reported social diseases prevalent among the poorer social classes and farmers, caused by dietary and agricultural monocultures, were: pellagra, rickets, beriberi, scurvy in long-term sailors, and the endemic goiter caused by iodine deficiency. However, this disease was less mentioned in medical books because it was erroneously considered to be an aesthetic rather than a clinical disorder. Congenital iodine deficiency syndrome was especially common in areas of southern Europe around the Alps and was often described by ancient Roman writers and depicted by artists. The earliest Alpine mountain climbers sometimes came upon whole villages affected by it. The prevalence of the condition was described from a medical perspective by several travellers and physicians in the late 18th and early 19th centuries. At that time the cause was not known and it was often attributed to "stagnant air" in mountain valleys or "bad water".
The proportion of people affected varied markedly throughout southern Europe and even within very small areas it might be common in one valley and not another. The number of severely affected persons was always a minority, and most persons were only affected to the extent of having a goitre and some degree of reduced cognition and growth. The majority of such cases were still socially functional in their pastoral villages. More mildly affected areas of Europe and North America in the 19th century were referred to as "goitre belts". The degree of iodine deficiency was milder and manifested primarily as thyroid enlargement rather than severe mental and physical impairment. In Switzerland, for example, where soil does not contain a large amount of iodine, cases of congenital iodine deficiency syndrome were very abundant and even considered genetically caused. As the variety of food sources dramatically increased in Europe and North America and the populations became less completely dependent on locally grown food, the prevalence of endemic goitre diminished. The early 20th century saw the discovery of the relationships of neurological impairment with hypothyroidism due to iodine deficiency. Both have been largely eliminated in the developed world. The term "cretin" was originally used to describe a person affected by this condition, but, as with words such as "spastic" and "lunatic", it underwent pejoration and is now considered derogatory and inappropriate. "Cretin" became a medical term in the 18th century, from an Occitan and an Alpine French expression, prevalent in a region where persons with such a condition were especially common (see below); it saw wide medical use in the 19th and early 20th centuries, and was a "tick box" category on Victorian-era census forms in the UK. The term spread more widely in popular English as a markedly derogatory term for a person who behaves stupidly. 
Because of its pejorative connotations in popular speech, health-care workers have mostly abandoned the term "cretin". The etymology of "cretin" is uncertain. Several hypotheses exist. The most common derivation provided in English dictionaries is from the Alpine French dialect pronunciation of the word "Chrétien" ("(a) Christian"), which was a greeting there. According to the "Oxford English Dictionary", the translation of the French term into "human creature" implies that the label "Christian" is a reminder of the humanity of the afflicted, in contrast to brute beasts. Other sources suggest that "Christian" describes the person's "Christ-like" inability to sin, stemming, in such cases, from an incapacity to distinguish right from wrong. Other speculative etymologies have been offered:
https://en.wikipedia.org/wiki?curid=6352
Council of Trent The Council of Trent, held between 1545 and 1563 in Trent (or Trento, in northern Italy), was the 19th ecumenical council of the Catholic Church. Prompted by the Protestant Reformation, it has been described as the embodiment of the Counter-Reformation. The Council issued condemnations of what it defined to be heresies committed by proponents of Protestantism, and also issued key statements and clarifications of the Church's doctrine and teachings, including scripture, the Biblical canon, sacred tradition, original sin, justification, salvation, the sacraments, the Mass, and the veneration of saints. The Council met for twenty-five sessions between 13 December 1545 and 4 December 1563. Pope Paul III, who convoked the Council, oversaw the first eight sessions (1545–47), while the twelfth to sixteenth sessions (1551–52) were overseen by Pope Julius III and the seventeenth to twenty-fifth sessions (1562–63) by Pope Pius IV. The consequences of the Council were also significant with regard to the Church's liturgy and practices. During its deliberations, the Council made the Vulgate the official example of the Biblical canon and commissioned the creation of a standard version, although this was not achieved until the 1590s. In 1565, a year after the Council finished its work, Pius IV issued the Tridentine Creed (after "Tridentum", Trent's Latin name) and his successor Pius V then issued the Roman Catechism and revisions of the Breviary and Missal in, respectively, 1566, 1568 and 1570. These, in turn, led to the codification of the Tridentine Mass, which remained the Church's primary form of the Mass for the next four hundred years. More than three hundred years passed until the next ecumenical council, the First Vatican Council, was convened in 1869.
On 15 March 1517, the Fifth Council of the Lateran closed its activities with a number of reform proposals (on the selection of bishops, taxation, censorship and preaching) but not on the major problems that confronted the Church in Germany and other parts of Europe. A few months later, on 31 October 1517, Martin Luther issued his 95 Theses in Wittenberg. Luther's position on ecumenical councils shifted over time, but in 1520 he appealed to the German princes to oppose the papal Church, if necessary with a council in Germany, open and free of the Papacy. After the Pope condemned in "Exsurge Domine" fifty-two of Luther's theses as heresy, German opinion considered a council the best method to reconcile existing differences. German Catholics, diminished in number, hoped for a council to clarify matters. It took a generation for the council to materialise, partly because of papal reluctance, given that a Lutheran demand was the exclusion of the papacy from the Council, and partly because of ongoing political rivalries between France and Germany and the Turkish dangers in the Mediterranean. Under Pope Clement VII (1523–34), troops of the Catholic Holy Roman Emperor Charles V sacked Papal Rome in 1527, "raping, killing, burning, stealing, the like had not been seen since the Vandals". Saint Peter's Basilica and the Sistine Chapel were used for horses. This, together with the Pontiff's ambivalence between France and Germany, led to his hesitation. Charles V strongly favoured a council, but needed the support of King Francis I of France, who attacked him militarily. Francis I generally opposed a general council due to partial support of the Protestant cause within France. In 1532 he agreed to the Nuremberg Religious Peace granting religious liberty to the Protestants, and in 1533 he further complicated matters when suggesting a general council to include both Catholic and Protestant rulers of Europe that would devise a compromise between the two theological systems. 
This proposal met the opposition of the Pope, for it gave recognition to Protestants and also elevated the secular Princes of Europe above the clergy on church matters. Faced with a Turkish attack, Charles held the support of the Protestant German rulers, all of whom delayed the opening of the Council of Trent. In reply to the Papal bull "Exsurge Domine" of Pope Leo X (1520), Martin Luther burned the document and appealed for a general council. In 1522 German diets joined in the appeal, with Charles V seconding and pressing for a council as a means of reunifying the Church and settling the Reformation controversies. Pope Clement VII (1523–1534) was vehemently against the idea of a council, agreeing with Francis I of France; Pope Pius II, in his bull "Execrabilis" (1460) and his reply to the University of Cologne (1463), had already set aside the theory of the supremacy of general councils laid down by the Council of Constance. Pope Paul III (1534–1549), seeing that the Protestant Reformation was no longer confined to a few preachers, but had won over various princes, particularly in Germany, to its ideas, desired a council. Yet when he proposed the idea to his cardinals, it was almost unanimously opposed. Nonetheless, he sent nuncios throughout Europe to propose the idea. Paul III issued a decree for a general council to be held in Mantua, Italy, to begin on 23 May 1537. Martin Luther wrote the Smalcald Articles in preparation for the general council. The Smalcald Articles were designed to sharply define where the Lutherans could and could not compromise. The council was ordered by the Emperor and Pope Paul III to convene in Mantua on 23 May 1537. It failed to convene after another war broke out between France and Charles V, resulting in a non-attendance of French prelates. Protestants refused to attend as well. Financial difficulties in Mantua led the Pope in the autumn of 1537 to move the council to Vicenza, where participation was poor.
The Council was postponed indefinitely on 21 May 1539. Pope Paul III then initiated several internal Church reforms while Emperor Charles V convened with Protestants and Cardinal Gasparo Contarini at the Diet of Regensburg to reconcile differences. Mediating and conciliatory formulations were developed on certain topics. In particular, a two-part doctrine of justification was formulated that would later be rejected at Trent. Unity failed between Catholic and Protestant representatives "because of different concepts of "Church" and "justification"". However, the council was delayed until 1545 and, as it happened, convened right before Luther's death. Unable, however, to resist the urging of Charles V, the pope, after proposing Mantua as the place of meeting, convened the council at Trent (at that time ruled by a prince-bishop under the Holy Roman Empire), on 13 December 1545; the Pope's decision to transfer it to Bologna in March 1547 on the pretext of avoiding a plague failed to take effect and the Council was indefinitely prorogued on 17 September 1549. None of the three popes reigning over the duration of the council ever attended, which had been a condition of Charles V. Papal legates were appointed to represent the Papacy. Reopened at Trent on 1 May 1551 by convocation of Pope Julius III (1550–1555), it was broken up by the sudden victory of Maurice, Elector of Saxony over the Emperor Charles V and his march into the surrounding state of Tirol on 28 April 1552. There was no hope of reassembling the council while the very anti-Protestant Paul IV was Pope. The council was reconvened by Pope Pius IV (1559–1565) for the last time, meeting from 18 January 1562 at Santa Maria Maggiore, and continued until its final adjournment on 4 December 1563.
It closed with a series of ritual acclamations honouring the reigning Pope, the Popes who had convoked the Council, the emperor and the kings who had supported it, the papal legates, the cardinals, the ambassadors present, and the bishops, followed by acclamations of acceptance of the faith of the Council and its decrees, and of anathema for all heretics. The history of the council is thus divided into three distinct periods: 1545–1549, 1551–1552 and 1562–1563. During the second period, the Protestants present asked for renewed discussion on points already defined and for bishops to be released from their oaths of allegiance to the Pope. When the last period began, all intention of conciliating the Protestants was gone and the Jesuits had become a strong force. This last period was begun especially as an attempt to prevent the formation of a general council including Protestants, as had been demanded by some in France. The number of attending members in the three periods varied considerably. The council was small to begin with, opening with only about 30 bishops. It increased toward the close, but never reached the number of the First Council of Nicaea (which had 318 members) nor of the First Vatican Council (which numbered 744). The decrees were signed in 1563 by 255 members, the highest attendance of the whole council, including four papal legates, two cardinals, three patriarchs, twenty-five archbishops, and 168 bishops, two-thirds of whom were Italians. The Italian and Spanish prelates were vastly preponderant in power and numbers. At the passage of the most important decrees, not more than sixty prelates were present. Although most Protestants did not attend, ambassadors and theologians of Brandenburg, Württemberg, and Strasbourg attended, having been granted an improved safe conduct. The French monarchy boycotted the entire council until the last minute when a delegation led by Charles de Guise, Cardinal of Lorraine finally arrived in November 1562.
The first outbreak of the French Wars of Religion had occurred earlier in the year and the French Church, facing a significant and powerful Protestant minority in France, experienced iconoclastic violence regarding the use of sacred images. Such concerns were not primary in the Italian and Spanish Churches. The last-minute inclusion of a decree on sacred images was a French initiative, and the text, never discussed on the floor of the council or referred to council theologians, was based on a French draft. The main objectives of the council were twofold, although other issues were also discussed. The doctrinal decisions of the council are set forth in decrees ("decreta"), which are divided into chapters ("capita"), which contain the positive statement of the conciliar dogmas, and into short canons ("canones"), which condemn the dissenting Protestant views with the concluding "anathema sit" ("let him be anathema"). The doctrinal acts are as follows: after reaffirming the Niceno-Constantinopolitan Creed (third session), the decree was passed (fourth session) confirming that the deuterocanonical books were on a par with the other books of the canon (against Luther's placement of these books in the Apocrypha of his edition) and coordinating church tradition with the Scriptures as a rule of faith. The Vulgate translation was affirmed to be authoritative for the text of Scripture. Justification (sixth session) was declared to be offered upon the basis of human cooperation with divine grace as opposed to the Protestant doctrine of passive reception of grace. Understanding the Protestant "faith alone" doctrine to be one of simple human confidence in divine mercy, the Council rejected the "vain confidence" of the Protestants, stating that no one can know who has received the grace of God. Furthermore, the Council affirmed—against some Protestants—that the grace of God can be forfeited through mortal sin.
The greatest weight in the Council's decrees is given to the sacraments. The seven sacraments were reaffirmed and the Eucharist pronounced to be a true propitiatory sacrifice as well as a sacrament, in which the bread and wine were consecrated into the Eucharist (thirteenth and twenty-second sessions). The term transubstantiation was used by the Council, but the specific Aristotelian explanation given by Scholasticism was not cited as dogmatic. Instead, the decree states that Christ is "really, truly, substantially present" in the consecrated forms. The sacrifice of the Mass was to be offered for dead and living alike and in giving to the apostles the command "do this in remembrance of me," Christ conferred upon them a sacerdotal power. The practice of withholding the cup from the laity was confirmed (twenty-first session) as one which the Church Fathers had commanded for good and sufficient reasons; yet in certain cases the Pope was made the supreme arbiter as to whether the rule should be strictly maintained. On the language of the Mass, "contrary to what is often said", the council condemned the belief that only vernacular languages should be used, while insisting on the use of Latin. Ordination (twenty-third session) was defined to imprint an indelible character on the soul. The priesthood of the New Testament takes the place of the Levitical priesthood. To the performance of its functions, the consent of the people is not necessary. In the decrees on marriage (twenty-fourth session) the excellence of the celibate state was reaffirmed, concubinage condemned and the validity of marriage made dependent upon the wedding taking place before a priest and two witnesses, although the lack of a requirement for parental consent ended a debate that had proceeded from the 12th century. In the case of a divorce, the right of the innocent party to marry again was denied so long as the other party was alive, even if the other party had committed adultery. 
However, the council "refused … to assert the necessity or usefulness of clerical celibacy". In the twenty-fifth and last session, the doctrines of purgatory, the invocation of saints and the veneration of relics were reaffirmed, as was also the efficacy of indulgences as dispensed by the Church according to the power given her, but with some cautionary recommendations, and a ban on the sale of indulgences. Short and rather inexplicit passages concerning religious images were to have great impact on the development of Catholic Church art. Much more than the Second Council of Nicaea (787), the Council fathers of Trent stressed the pedagogical purpose of Christian images. The council appointed, in 1562 (eighteenth session), a commission to prepare a list of forbidden books ("Index Librorum Prohibitorum"), but it later left the matter to the Pope. The preparation of a catechism and the revision of the Breviary and Missal were also left to the pope. The catechism embodied the council's far-reaching results, including reforms and definitions of the sacraments, the Scriptures, church dogma, and duties of the clergy. On adjourning, the Council asked the supreme pontiff to ratify all its decrees and definitions. This petition was complied with by Pope Pius IV, on 26 January 1564, in the papal bull, "Benedictus Deus", which enjoins strict obedience upon all Catholics and forbids, under pain of excommunication, all unauthorised interpretation, reserving this to the Pope alone and threatens the disobedient with "the indignation of Almighty God and of his blessed apostles, Peter and Paul." Pope Pius appointed a commission of cardinals to assist him in interpreting and enforcing the decrees. The "Index librorum prohibitorum" was announced in 1564 and the following books were issued with the papal imprimatur: the Profession of the Tridentine Faith and the Tridentine Catechism (1566), the Breviary (1568), the Missal (1570) and the Vulgate (1590 and then 1592).
The decrees of the council were acknowledged in Italy, Portugal, Poland and by the Catholic princes of Germany at the Diet of Augsburg in 1566. Philip II of Spain accepted them for Spain, the Netherlands and Sicily inasmuch as they did not infringe the royal prerogative. In France they were officially recognised by the king only in their doctrinal parts. Although the disciplinary or moral reformatory decrees were never published by the throne, they received official recognition at provincial synods and were enforced by the bishops. Holy Roman Emperors Ferdinand I and Maximilian II never recognized the existence of any of the decrees. No attempt was made to introduce them into England. Pius IV sent the decrees to Mary, Queen of Scots, with a letter dated 13 June 1564, requesting her to publish them in Scotland, but she dared not do it in the face of John Knox and the Reformation. These decrees were later supplemented by the First Vatican Council of 1870. A comprehensive history is found in Hubert Jedin's "The History of the Council of Trent (Geschichte des Konzils von Trient)" with about 2500 pages in four volumes: "The History of the Council of Trent: The fight for a Council" (Vol I, 1951); "The History of the Council of Trent: The first Sessions in Trent (1545–1547)" (Vol II, 1957); "The History of the Council of Trent: Sessions in Bologna 1547–1548 and Trento 1551–1552" (Vol III, 1970, 1998); "The History of the Council of Trent: Third Period and Conclusion" (Vol IV, 1976). The canons and decrees of the council have been published very often and in many languages. The first issue was by Paulus Manutius (Rome, 1564). Commonly used Latin editions are by Judocus Le Plat (Antwerp, 1779) and by Johann Friedrich von Schulte and Aemilius Ludwig Richter (Leipzig, 1853). Other editions are in vol. vii. of the "Acta et decreta conciliorum recentiorum.
Collectio Lacensis" (7 vols., Freiburg, 1870–90), reissued as an independent volume (1892); "Concilium Tridentinum: Diariorum, actorum, epistularum, … collectio", ed. Sebastianus Merkle (4 vols., Freiburg, 1901 sqq.); as well as Mansi, "Concilia", xxxv. 345 sqq. Note also Carl Mirbt, "Quellen", 2d ed, pp. 202–255. An English edition is by James Waterworth (London, 1848; "With Essays on the External and Internal History of the Council"). The original acts and debates of the council, as prepared by its general secretary, Bishop Angelo Massarelli, in six large folio volumes, are deposited in the Vatican Library and remained there unpublished for more than 300 years and were brought to light, though only in part, by Augustin Theiner, priest of the oratory (d. 1874), in "Acta genuina sancti et oecumenici Concilii Tridentini nunc primum integre edita" (2 vols., Leipzig, 1874). Most of the official documents and private reports, however, which bear upon the council, were made known in the 16th century and since. The most complete collection of them is that of J. Le Plat, "Monumentorum ad historicam Concilii Tridentini collectio" (7 vols., Leuven, 1781–87). New materials were published (Vienna, 1872); by JJI von Döllinger "(Ungedruckte Berichte und Tagebücher zur Geschichte des Concilii von Trient)" (2 parts, Nördlingen, 1876); and August von Druffel, "Monumenta Tridentina" (Munich, 1884–97). Out of 87 books written between 1546 and 1564 attacking the Council of Trent, 41 were written by Pier Paolo Vergerio, a former papal nuncio turned Protestant Reformer. The 1565–73 "Examen decretorum Concilii Tridentini" ("Examination of the Council of Trent") by Martin Chemnitz was the main Lutheran response to the Council of Trent. Making extensive use of scripture and patristic sources, it was presented in response to a polemical writing which Diogo de Payva de Andrada had directed against Chemnitz.
The "Examen" had four parts: Volume I examined sacred scripture, free will, original sin, justification, and good works. Volume II examined the sacraments, including baptism, confirmation, the sacrament of the eucharist, communion under both kinds, the mass, penance, extreme unction, holy orders, and matrimony. Volume III examined virginity, celibacy, purgatory, and the invocation of saints. Volume IV examined the relics of the saints, images, indulgences, fasting, the distinction of foods, and festivals. In response, Andrada wrote the five-part "Defensio Tridentinæ fidei", which was published posthumously in 1578. However, the "Defensio" did not circulate as extensively as the "Examen", nor were any full translations ever published. A French translation of the "Examen" by Eduard Preuss was published in 1861. German translations were published in 1861, 1884, and 1972. In English, a complete translation by Fred Kramer drawing from the original Latin and the 1861 German was published beginning in 1971.
https://en.wikipedia.org/wiki?curid=6354
Chloroplast Chloroplasts are organelles that conduct photosynthesis, where the photosynthetic pigment chlorophyll captures the energy from sunlight, converts it, and stores it in the energy-storage molecules ATP and NADPH while freeing oxygen from water in plant and algal cells. They then use the ATP and NADPH to make organic molecules from carbon dioxide in a process known as the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, much amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in unicellular algae, up to 100 in plants like "Arabidopsis" and wheat. A chloroplast is a type of organelle known as a plastid, characterized by its two membranes and a high concentration of chlorophyll. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis. Chloroplasts are highly dynamic—they circulate and are moved around within plant cells, and occasionally pinch in two to reproduce. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts, like mitochondria, contain their own DNA, which is thought to be inherited from their ancestor—a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell. Chloroplasts cannot be made by the plant cell and must be inherited by each daughter cell during cell division. With one exception (the amoeboid "Paulinella chromatophora"), all chloroplasts can probably be traced back to a single endosymbiotic event, when a cyanobacterium was engulfed by the eukaryote. Despite this, chloroplasts can be found in an extremely wide set of organisms, some not even directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events. The word "chloroplast" is derived from the Greek words "chloros" (χλωρός), which means green, and "plastes" (πλάστης), which means "the one who forms". 
The first definitive description of a chloroplast ("Chlorophyllkörnen", "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, A. F. W. Schimper named these bodies "chloroplastids" ("Chloroplastiden"). In 1884, Eduard Strasburger adopted the term "chloroplasts" ("Chloroplasten"). Chloroplasts are one of many types of organelles in the plant cell. They are considered to have evolved from endosymbiotic cyanobacteria. Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905 after Andreas Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and the amoeboid "Paulinella chromatophora". Chloroplasts are considered endosymbiotic cyanobacteria. Cyanobacteria are sometimes called blue-green algae even though they are prokaryotes. They are a diverse phylum of bacteria capable of carrying out photosynthesis, and are gram-negative, meaning that they have two cell membranes. Cyanobacteria also contain a peptidoglycan cell wall, which is thicker than in other gram-negative bacteria, and which is located between their two cell membranes. Like chloroplasts, they have thylakoids within them. On the thylakoid membranes are photosynthetic pigments, including chlorophyll "a". Phycobilins are also common cyanobacterial pigments, usually organized into hemispherical phycobilisomes attached to the outside of the thylakoid membranes (though phycobilins are not shared by all chloroplasts). Somewhere around 1 to 2 billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in. 
The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram-negative cell wall, and not the phagosomal membrane from the host, which was probably lost. The new cellular resident quickly became an advantage, providing food for the eukaryotic host, which in turn allowed the cyanobacterium to live within it. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. From genomes that probably originally contained over 3,000 genes, only about 130 remain in the chloroplasts of contemporary plants. Some of its proteins were then synthesized in the cytoplasm of the host cell, and imported back into the chloroplast (formerly the cyanobacterium). Separately, somewhere about 90–140 million years ago, it happened again and led to the amoeboid "Paulinella chromatophora". This event is called "endosymbiosis", or "cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the "host" while the internal cell is called the "endosymbiont". Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called "serial endosymbiosis"—an early eukaryote engulfing the mitochondrion ancestor, and some descendants of it then engulfing the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria. Whether primary chloroplasts came from a single endosymbiotic event or from many independent engulfments across various eukaryotic lineages has long been debated. It is now generally held that organisms with primary chloroplasts share a single ancestor that took in a cyanobacterium 600–2000 million years ago. It has been proposed that the closest living relative of this bacterium is "Gloeomargarita lithophora." 
The exception is the amoeboid "Paulinella chromatophora", which descends from an ancestor that took in a "Prochlorococcus" cyanobacterium 90–500 million years ago. These chloroplasts, which can be traced back directly to a cyanobacterial ancestor, are known as "primary plastids" ("plastid" in this context means almost the same thing as chloroplast). All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the amoeboid "Paulinella chromatophora" lineage, the rhodophyte (red algal) chloroplast lineage, or the chloroplastidan (green) chloroplast lineage. The rhodophyte and chloroplastidan lineages are the largest, with chloroplastidan (green) being the one that contains the land plants. The endosymbiosis event is usually considered to have occurred within the Archaeplastida, of which the glaucophytes are possibly the earliest-diverging lineage. The glaucophyte chloroplast group is the smallest of the three Archaeplastida chloroplast lineages, being found in only 13 species, and is thought to be the one that branched off the earliest. Glaucophytes have chloroplasts that retain a peptidoglycan wall between their double membranes, like their cyanobacterial parent. For this reason, glaucophyte chloroplasts are also known as 'muroplasts' (besides 'cyanoplasts' or 'cyanelles'). Glaucophyte chloroplasts also contain concentric unstacked thylakoids, which surround a carboxysome – an icosahedral structure in which glaucophyte chloroplasts and cyanobacteria keep their carbon-fixation enzyme RuBisCO. The starch that they synthesize collects outside the chloroplast. Like cyanobacteria, glaucophyte and rhodophyte chloroplast thylakoids are studded with light-collecting structures called phycobilisomes. For these reasons, glaucophyte chloroplasts are considered a primitive intermediate between cyanobacteria and the more evolved chloroplasts in red algae and plants. 
The rhodophyte, or red algal, chloroplast group is another large and diverse chloroplast lineage. Rhodophyte chloroplasts are also called "rhodoplasts", literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll "a" and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll "a" and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga. The chloroplastidan chloroplasts, or green chloroplasts, are another large, highly diverse primary chloroplast lineage. Their host organisms are commonly known as the green algae and land plants. They differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes, and contain chlorophyll "b" instead. Most green chloroplasts are (obviously) green, though some aren't, like some forms of "Hæmatococcus pluvialis", due to accessory pigments that override the chlorophylls' green colors. Chloroplastida chloroplasts have lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants seem to have kept the genes for the synthesis of the peptidoglycan layer, though they've been repurposed for use in chloroplast division instead. Most of the chloroplasts depicted in this article are green chloroplasts. 
Green algae and plants keep their starch "inside" their chloroplasts, and in plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts contain a structure called a pyrenoid, which is functionally similar to the glaucophyte carboxysome in that it is where RuBisCO and CO2 are concentrated in the chloroplast. "Helicosporidium" is a genus of nonphotosynthetic parasitic green algae that is thought to contain a vestigial chloroplast. Genes from a chloroplast and nuclear genes indicating the presence of a chloroplast have been found in "Helicosporidium" even though the chloroplast itself has never been directly observed. While most chloroplasts originate from that first set of endosymbiotic events, "Paulinella chromatophora" is an exception that acquired a photosynthetic cyanobacterial endosymbiont more recently. It is not clear whether that symbiont is closely related to the ancestral chloroplast of other eukaryotes. Being in the early stages of endosymbiosis, "Paulinella chromatophora" can offer some insights into how chloroplasts evolved. "Paulinella" cells contain one or two sausage-shaped blue-green photosynthesizing structures called chromatophores, descended from the cyanobacterium "Synechococcus". Chromatophores cannot survive outside their host. Chromatophore DNA is about a million base pairs long, containing around 850 protein-coding genes—far less than the three million base pair "Synechococcus" genome, but much larger than the approximately 150,000 base pair genome of the more assimilated chloroplast. Chromatophores have transferred much less of their DNA to the nucleus of their host. About 0.3–0.8% of the nuclear DNA in "Paulinella" is from the chromatophore, compared with 11–14% from the chloroplast in plants. Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga that contained a chloroplast. 
These chloroplasts are known as secondary plastids. While primary chloroplasts have a double membrane from their cyanobacterial ancestor, secondary chloroplasts have additional membranes outside of the original two, as a result of the secondary endosymbiotic event, when a nonphotosynthetic eukaryote engulfed a chloroplast-containing alga but failed to digest it—much like the cyanobacterium at the beginning of this story. The engulfed alga was broken down, leaving only its chloroplast, and sometimes its cell membrane and nucleus, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane. The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast. All secondary chloroplasts come from green and red algae—no secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote. Green algae have been taken up by the euglenids, chlorarachniophytes, a lineage of dinoflagellates, and possibly the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles and haptophytes) in three or four separate engulfments. Many green algal derived chloroplasts contain pyrenoids, but unlike chloroplasts in their green algal ancestors, storage product collects in granules outside the chloroplast. Euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophyte chloroplasts have three membranes—it is thought that the membrane of the primary endosymbiont was lost, leaving the cyanobacterial membranes, and the secondary host's phagosomal membrane. 
Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. Photosynthetic product is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte. Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a "red" algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast. Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm. Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm. "Lepidodinium viride" and its close relatives are dinophytes (see below) that lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). "Lepidodinium" is the only dinophyte that has a chloroplast that's not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. 
The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast). Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes, the outermost of which is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Their chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in their thylakoid space, rather than anchored on the outside of their thylakoid membranes. Cryptophytes may have played a key role in the spreading of red algal based chloroplasts. Haptophytes are similar and closely related to cryptophytes and heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize chrysolaminarin sugar, which they store completely outside of the chloroplast, in the cytoplasm of the haptophyte. The heterokontophytes, also known as the stramenopiles, are a very large and diverse group of eukaryotes. The photoautotrophic lineage, Ochrophyta, including the diatoms and the brown algae, golden algae, and yellow-green algae, also contains red algal derived chloroplasts. Heterokont chloroplasts are very similar to haptophyte chloroplasts, containing a pyrenoid, triplet thylakoids, and, with some exceptions, a four-membraned plastid envelope whose outermost epiplastid membrane is connected to the endoplasmic reticulum. 
Like haptophytes, heterokontophytes store sugar in chrysolaminarin granules in the cytoplasm. Heterokontophyte chloroplasts contain chlorophyll "a" and with a few exceptions chlorophyll "c", but also have carotenoids which give them their many colors. The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. The most notable shared characteristic is the presence of cortical (outer-region) alveoli (sacs). These are flattened vesicles (sacs) packed into a continuous layer just under the membrane and supporting it, typically forming a flexible pellicle (thin skin). In dinoflagellates they often form armor plates. Many members contain a red-algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid. Apicomplexans are a group of alveolates. Like the helicosporidia, they're parasitic, and have a nonphotosynthetic chloroplast. They were once thought to be related to the helicosporidia, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include "Plasmodium", the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Other apicomplexans like "Cryptosporidium" have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic. Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. 
Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, iron-sulfur clusters, and carry out part of the heme pathway. This makes the apicoplast an attractive target for drugs to cure apicomplexan-related diseases. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dump the organelle. The Chromerida is a newly discovered group of algae from Australian corals which comprises some close photosynthetic relatives of the apicomplexans. The first member, "Chromera velia", was discovered and first isolated in 2001. The discovery of "Chromera velia", which has a structure similar to that of the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll c and use the type II form of RuBisCO obtained from a horizontal transfer event. The dinoflagellates are yet another very large and diverse group of protists, around half of which are (at least partially) photosynthetic. Most dinophyte chloroplasts are secondary red algal derived chloroplasts. Many other dinophytes have lost the chloroplast (becoming the nonphotosynthetic kind of dinoflagellate), or replaced it through "tertiary" endosymbiosis—the engulfment of another eukaryotic alga containing a red algal derived chloroplast. Others replaced their original chloroplast with a green algal derived one. Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll "a", chlorophyll "c2", "beta"-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. 
All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three. The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin, along with chlorophyll "a" and chlorophyll "c2". Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. An important feature of these chloroplasts is that their chloroplast DNA is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages. The fucoxanthin dinophyte lineages (including "Karlodinium" and "Karenia") lost their original red algal derived chloroplast, and replaced it with a new chloroplast derived from a haptophyte endosymbiont. "Karlodinium" and "Karenia" probably took up different haptophytes. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six-membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it. Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. 
Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry. Some dinophytes, like "Kryptoperidinium" and "Durinskia" have a diatom (heterokontophyte) derived chloroplast. These chloroplasts are bounded by up to "five" membranes, (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However the diatom endosymbiont can't store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably can't be called a nucleomorph because it shows no sign of genome reduction, and might have even been "expanded". Diatoms have been engulfed by dinoflagellates at least three times. The diatom endosymbiont is bounded by a single membrane, inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids. In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still around, converted to an eyespot. In some groups of mixotrophic protists, like some dinoflagellates (e.g. "Dinophysis"), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may only have a lifetime of a few days and are then replaced. Members of the genus "Dinophysis" have a phycobilin-containing chloroplast taken from a cryptophyte. 
However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and "Dinophysis" species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the "Dinophysis" chloroplast is a kleptoplast—if so, "Dinophysis" chloroplasts wear out and "Dinophysis" species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones. Chloroplasts have their own DNA, often abbreviated as ctDNA, or cpDNA. It is also known as the plastome. Its existence was first proved in 1962, and first sequenced in 1986—when two Japanese research teams sequenced the chloroplast DNA of liverwort and tobacco. Since then, hundreds of chloroplast DNAs from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content. With few exceptions, most chloroplasts have their entire chloroplast genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers, and have a mass of about 80–130 million daltons. While usually thought of as a circular molecule, there is some evidence that chloroplast DNA molecules more often take on a linear shape. Many chloroplast DNAs contain two "inverted repeats", which separate a long single copy section (LSC) from a short single copy section (SSC). While a given pair of inverted repeats are rarely completely identical, they are always very similar to each other, apparently resulting from concerted evolution. 
The inverted repeats vary widely in length, each ranging from 4,000 to 25,000 base pairs and containing as few as four or as many as over 150 genes. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long. The inverted repeat regions are highly conserved among land plants, and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceae), suggesting that they predate the chloroplast, though some chloroplast DNAs have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast DNAs which have lost some of the inverted repeat segments tend to get rearranged more. New chloroplasts may contain up to 100 copies of their DNA, though the number of chloroplast DNA copies decreases to about 15–20 as the chloroplasts age. They are usually packed into nucleoids, which can contain several identical chloroplast DNA rings. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Though chloroplast DNA is not associated with true histones, in red algae, similar proteins that tightly pack each chloroplast DNA ring into a nucleoid have been found. In chloroplasts of the moss "Physcomitrella patens", the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant "Arabidopsis thaliana", the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage. 
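The quadripartite arrangement described above—a long single-copy (LSC) and a short single-copy (SSC) region separated by a pair of inverted repeats—can be sketched as a toy data structure. The sequences and region sizes below are invented purely for illustration; real plastomes are tens of thousands of base pairs per region.

```python
# Toy sketch (made-up sequences, not real data) of the quadripartite
# chloroplast-genome layout: a single circle in which two inverted
# repeats (IRa, IRb) separate a long single-copy (LSC) region from a
# short single-copy (SSC) region.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def assemble_plastome(lsc: str, ir: str, ssc: str) -> str:
    """Lay the genome out as LSC + IRa + SSC + IRb, where IRb is the
    reverse complement of IRa (which is what makes the repeat 'inverted')."""
    return lsc + ir + ssc + reverse_complement(ir)

# Mock regions; real sizes are roughly 80,000+ bp (LSC) and
# 4,000-25,000 bp (each inverted repeat).
lsc, ir, ssc = "ATGCCGTA", "GGATGA", "TTAACG"
genome = assemble_plastome(lsc, ir, ssc)

# Recover the two repeat copies from their coordinates and confirm that
# they read identically on opposite strands.
ira = genome[len(lsc):len(lsc) + len(ir)]
irb = genome[-len(ir):]
assert reverse_complement(irb) == ira
```

Because the two copies are reverse complements rather than direct repeats, losing or "flipping" one of them (as some chloroplast DNAs have done) turns the pair into direct repeats, as noted above.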
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to replicate the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes. In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination, the loss of an amino group, is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine. Hypoxanthine can bind to cytosine, and when the XC base pair is replicated, it becomes a GC (thus, an A → G base change). In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. 
It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures. One competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of the failure to explain the deamination gradient as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism. The chloroplast genome most commonly includes around 100 genes that code for a variety of things, mostly to do with the protein pipeline and photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs). Among land plants, the contents of the chloroplast genome are fairly similar. Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called "endosymbiotic gene transfer". 
As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating chloroplasts can lose their genome during the endosymbiotic gene transfer process. Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provides evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast. In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in "Arabidopsis", corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants. Of the approximately 3000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called "retrograde signaling". Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA, the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome.
The ribosomes in chloroplasts are similar to bacterial ribosomes. Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes. Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway, because many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane and are therefore topologically outside of the cell: to reach the chloroplast from the cytosol, a protein must cross the cell membrane, which means entering the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway. Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle. In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a "cleavable transit peptide" that's added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates, or adds a phosphate group to, many (but not all) of them in their transit sequences. Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast. From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or "translocon on the outer chloroplast membrane", and the TIC complex, or "translocon on the inner chloroplast membrane". Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space. In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are ≈20 µm³ in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., "Oedogonium"), a cup (e.g., "Chlamydomonas"), a ribbon-like spiral around the edges of the cell (e.g., "Spirogyra"), or slightly twisted bands at the cell edges (e.g., "Sirogonium"). Some algae have two chloroplasts in each cell; they are star-shaped in "Zygnema", or may follow the shape of half the cell in order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles, for example, some species of "Chlorella" have a cup-shaped chloroplast that occupies much of the cell.
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats. There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium—which is not true—both chloroplast membranes are homologous to the cyanobacterium's original double membranes. The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation to generate ATP energy. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is in the opposite direction compared to oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion. The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across.
However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or "translocon on the outer chloroplast membrane". The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule. Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts. Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes. Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called "muroplasts" (from Latin "mura", meaning "wall"). Other chloroplasts have lost the cyanobacterial wall, leaving an intermembrane space between the two chloroplast envelope membranes. The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex ("translocon on the inner chloroplast membrane"), which is located in the inner chloroplast membrane.
In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized. Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and intermembrane space. The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma. Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While similar to bacterial ribosomes, chloroplast translation is more complex than in bacteria, so chloroplast ribosomes include some chloroplast-unique features. Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes.
Such loss is also rarely observed in other plastids and prokaryotes. Plastoglobuli (singular "plastoglobulus", sometimes spelled "plastoglobule(s)"), are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts. Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids including plastoquinone, vitamin E, carotenoids and chlorophylls. Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid. Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones. Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle.
Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane. Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate. Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules. The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out. The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants. Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo". Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of membranous sacks called thylakoids where chlorophyll is found and the light reactions of photosynthesis happen.
In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating. Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana. In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick. For a long time, the three-dimensional structure of the thylakoid system was unknown or disputed. One model has the granum as a stack of thylakoids linked by helical stromal thylakoids; the other has the granum as a single folded thylakoid connected in a "hub and spoke" way to other grana by stromal thylakoids. While the thylakoid system is still commonly depicted according to the folded thylakoid model, it was determined in 2011 that the stacked and helical thylakoids model is correct. In the helical thylakoid model, grana consist of a stack of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are helicoid stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of 20–25°, connecting to each granal thylakoid at a bridge-like slit junction. The helicoids may extend as large sheets that link multiple grana, or narrow to tube-like bridges between grana. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.
Thylakoids (sometimes spelled "thylakoïds"), are small interconnected sacks which contain the membranes that the light reactions of photosynthesis take place on. The word "thylakoid" comes from the Greek word "thylakos" which means "sack". Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine. There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture. In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They can't fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids. The number of thylakoids and the total thylakoid area of a chloroplast are influenced by light exposure.
Shaded chloroplasts contain larger and more grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal. Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Paper chromatography of spinach leaf extract shows the various pigments present in their chloroplasts: xanthophylls, chlorophyll "a", and chlorophyll "b". Chlorophyll "a" is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll "a" is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll "b", chlorophyll "c", chlorophyll "d", and chlorophyll "f". Chlorophyll "b" is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls "a" and "b" together that make most plant and green algal chloroplasts green. Chlorophyll "c" is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll "c" is also found in some green algae and cyanobacteria. Chlorophylls "d" and "f" are pigments found only in some cyanobacteria. In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids.
They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, like during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll "a". Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts. Phycobilins are a third group of pigments found in cyanobacteria, and glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria don't have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead. To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has a problem—it has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the end result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle which uses RuBisCO. C4 plants evolved a way to solve this—by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf.
The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf. As a result, chloroplasts in mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called "C4 photosynthesis". The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off the CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity. Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma where they still carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains. Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to get more surface area to transport material in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts. Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts—the chloroplasts, or more specifically, the chlorophyll in them are what make the photosynthetic parts of a plant green. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue.
A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts. In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers of a leaf, and the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf. The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones. Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move. In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism.
In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption. Studies of "Vallisneria gigantea", an aquatic flowering plant, have shown that chloroplasts can get moving within five minutes of light exposure, though they don't initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place. Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what they do is controversial. Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of the chloroplast's role in a plant cell's immune response, pathogens frequently target it. Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence. Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant.
In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection. Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide and reactive oxygen species which can serve as defense-signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus. In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—linoleic acid, a fatty acid, is a precursor to jasmonate. One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+). The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, the reduced form of NADP+, and ATP to fuel the dark reactions. ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form.
NADP+ is an electron carrier which ferries high energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH. Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions. Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions. Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from its hydrogen atoms. While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. 
This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in plants, which need more ATP than NADPH. The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast. While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions. The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA. The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions. Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm. Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast. Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact.
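The "one out of every six" bookkeeping described above can be sketched numerically. This is a simplified carbon-only model (ATP and NADPH costs are deliberately omitted), assuming three turns of the cycle:

```python
# Carbon-only bookkeeping for three turns of the Calvin cycle (simplified sketch;
# ATP and NADPH consumption are not modelled).
RUBP_C, CO2_C, G3P_C = 5, 1, 3           # carbon atoms per molecule

turns = 3                                # three CO2 fixations
carbons_in = turns * (RUBP_C + CO2_C)    # 3 RuBP + 3 CO2 = 18 carbons
g3p_made = carbons_in // G3P_C           # 18 carbons -> 6 G3P molecules
g3p_recycled = turns * RUBP_C // G3P_C   # 5 G3P rebuild the 3 RuBP (15 carbons)
g3p_exported = g3p_made - g3p_recycled   # the one in six that leaves the cycle

print(g3p_made, g3p_exported)            # 6 made, 1 exported per 3 CO2 fixed
```

The exported G3P carries exactly three carbons, matching the three CO2 molecules fixed, so carbon is conserved across the cycle.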
Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis. While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor. Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. These include Crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism. Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8. The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3. CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much. Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system.
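Because pH is a base-10 logarithmic scale, the roughly four-unit difference between the thylakoid interior (pH around 4) and the stroma (pH around 8) implies a very large concentration ratio. A small sketch using those approximate values (the exact pH values vary with conditions):

```python
def h_concentration(pH):
    """Hydrogen-ion concentration in mol/L from pH (pH = -log10[H+])."""
    return 10 ** (-pH)

pH_lumen, pH_stroma = 4.0, 8.0   # approximate values from the text
ratio = h_concentration(pH_lumen) / h_concentration(pH_stroma)
print(f"{ratio:.0f}")            # 10000: each pH unit is a tenfold change in [H+]
```

This is why even a modest light-driven pH shift (1 to 1.5 units, as described below) corresponds to a 10- to 30-fold change in hydrogen-ion concentration.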
In the presence of light, the pH of the thylakoid lumen can drop up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit. Chloroplasts alone make almost all of a plant cell's amino acids in their stroma, except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (the proplastid too) but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine but it is unclear whether the organelle carries out the last leg of the pathway or if it happens in the cytosol. Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3), which supplies the plant with nitrogen to make its amino acids and nucleotides. The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical lengths of fatty acids produced in the plastid are 16 or 18 carbons, with 0–3 cis double bonds. The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. The initiation of synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP.
Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration. Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites. Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common. In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana. If angiosperm shoots are not exposed to the required light for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll, and has inner membrane invaginations that form a lattice of tubes in their stroma, called a prolamellar body. While etioplasts lack chlorophyll, they have a yellow chlorophyll precursor stockpiled. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, where the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts.
Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in. Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. Chloroplast, amyloplast, chromoplast, proplast, etc., are not absolute states—intermediate forms are common. Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells. In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast. Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells.
Much of what we know about chloroplast division comes from studying organisms like "Arabidopsis" and the red alga "Cyanidioschyzon merolæ". The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments, and with the help of the protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein, ARC3, may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form. Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like "Cyanidioschyzon merolæ", chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space. Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts. Later, the dynamins migrate under the outer plastid-dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast into two daughter chloroplasts.
A remnant of the outer plastid dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts. Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms. In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown. Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor quality green light, but are slow to complete division—they require exposure to bright white light to complete division. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts. Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs in very low levels in some flowering plants. Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring. 
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally. Angiosperms, which pass on chloroplasts maternally, have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo. Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance. Recently, chloroplasts have caught the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a failed containment rate of transplastomic plants at 3 in 1,000,000.
https://en.wikipedia.org/wiki?curid=6355
Camp David Camp David is the country retreat for the president of the United States. It is located in the wooded hills of Catoctin Mountain Park, in Frederick County, Maryland, near the towns of Thurmont and Emmitsburg, about 62 miles (100 km) north-northwest of the national capital city of Washington, D.C. It is officially known as the Naval Support Facility Thurmont. Because it is technically a military installation, the staffing is primarily provided by the Seabees, Civil Engineer Corps (CEC), the United States Navy and the United States Marine Corps. Naval construction battalions are tasked with base construction and send detachments as needed. Originally known as Hi-Catoctin, Camp David was built as a camp for federal government agents and their families by the Works Progress Administration. Construction started in 1935 and was completed in 1938. In 1942, President Franklin D. Roosevelt converted it to a presidential retreat and renamed it "Shangri-La" (for the fictional Himalayan paradise in the 1933 novel "Lost Horizon" by British author James Hilton). Camp David received its present name in 1953 from Dwight D. Eisenhower, in honor of his father and his grandson, both named David. Eisenhower had the practice golf facility built at Camp David. The Catoctin Mountain Park does not indicate the location of Camp David on park maps due to privacy and security concerns, although it can be seen through the use of publicly accessible satellite images. Franklin D. Roosevelt hosted Sir Winston Churchill at Shangri-La in May 1943. Dwight Eisenhower held his first cabinet meeting there on November 22, 1955, following the hospitalization and convalescence he required after a heart attack suffered in Denver, Colorado on September 24. Eisenhower met there with Nikita Khrushchev for two days of discussions in September 1959. John F.
Kennedy and his family often enjoyed riding, golf and other recreational activities there, and Kennedy often allowed White House staff and Cabinet members to use the retreat when he or his family were not there. Lyndon B. Johnson met with advisors in this setting and hosted both Australian prime minister Harold Holt and Canadian prime minister Lester B. Pearson there. Richard Nixon was a frequent visitor. He personally directed the construction of a swimming pool and other improvements to Aspen Lodge. Gerald Ford hosted Indonesian president Suharto at Camp David. Jimmy Carter initially favored closing Camp David in order to save money, but once he actually visited the place, he decided to keep it. Carter brokered the Camp David Accords there in September 1978 between Egyptian president Anwar al-Sadat and Israeli prime minister Menachem Begin. Ronald Reagan visited the retreat more than any other president. In 1984, Reagan hosted British prime minister Margaret Thatcher. An avid horseback rider, Reagan restored the nature trails that Nixon had paved over. George H. W. Bush's daughter, Dorothy Bush Koch, was married there in 1992, in the first wedding held at Camp David. Bush Sr. was a golfer. During his tenure as president, Bill Clinton spent every Thanksgiving at Camp David with his family. In July 2000, he hosted the 2000 Camp David Summit negotiations between Israeli prime minister Ehud Barak and Palestinian Authority chairman Yasser Arafat there. In February 2001, George W. Bush held his first meeting with a European leader, British prime minister Tony Blair, at Camp David to discuss missile defense, Iraq, and NATO. During his two terms in office, Bush visited Camp David 149 times, for a total of 487 days, both to host foreign visitors and as a personal retreat. He met there with Blair four times.
Among the numerous other foreign leaders he hosted at Camp David were Russian president Vladimir Putin and President Musharraf of Pakistan in 2003, Danish prime minister Anders Fogh Rasmussen in June 2006, and British prime minister Gordon Brown in 2007. Barack Obama chose Camp David to host the 38th G8 summit in 2012. President Obama also hosted Russian prime minister Dmitry Medvedev at Camp David, as well as the GCC Summit there in 2015. Donald Trump hosted congressional leaders at Camp David as Republicans prepared to defend both houses of Congress in the 2018 midterm elections. The 46th G7 summit was to be held at Camp David on June 10–12, 2020, but was cancelled due to health concerns during the ongoing COVID-19 pandemic. To be able to play his favorite sport, President Eisenhower had golf course architect Robert Trent Jones design a practice golf facility at Camp David. Around 1954, Jones built one golf hole – a par 3 – with four different tees; Eisenhower added a 250-yard (228.6 m) driving range near the helicopter landing zone. On July 2, 2011, an F-15 intercepted a civilian aircraft approximately from Camp David, when President Obama was in the residence. The two-seater, which was out of radio communication, was escorted to nearby Hagerstown, Maryland, without incident. On July 10, 2011, an F-15 intercepted another small plane near Camp David when Obama was again in the residence; a total of three were intercepted that weekend.
https://en.wikipedia.org/wiki?curid=6357
Crux Crux is a constellation centred on four stars in the southern sky in a bright portion of the Milky Way. It is among the most easily distinguished constellations as its hallmark (asterism) stars each have an apparent visual magnitude brighter than +2.8, even though it is the smallest of all 88 modern constellations. Its name is Latin for cross, and it is dominated by a cross-shaped or kite-like asterism that is commonly known as the Southern Cross. Predominating the constellation is the first-magnitude blue-white star α Crucis (Acrux), its brightest and most southerly member. There follow four less dominant stars which appear clockwise and in order of lessening magnitude: β Crucis (Mimosa), γ Crucis (Gacrux), δ Crucis (Imai) and ε Crucis (Ginan). Many of these brighter stars are members of the Scorpius–Centaurus Association, a large but loose group of hot blue-white stars that appear to share common origins and motion across the southern Milky Way. Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its western border. To the southeast lies a large, relatively nearby dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca. The bright stars in Crux were known to the Ancient Greeks; Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 CE, the stars in the constellation we now call Crux never rose above the horizon throughout most of Europe.
Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his Divine Comedy. Others argue that Dante's description was allegorical, and that he almost certainly did not know about the constellation. The 15th century Venetian navigator Alvise Cadamosto made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the "carro dell'ostro" ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras for being the first European to depict it correctly. Faras sketched and described the constellation (calling it "Las Guardas") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch. Explorer Amerigo Vespucci seems to have observed not only the Southern Cross but also the neighboring Coalsack Nebula on his second voyage in 1501–1502. Another early modern description clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515–1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole including the Southern Cross and the two Magellanic Clouds seen in an external orientation, as on a globe. Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. 
Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux. Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −55.68° and −64.70°. The constellation is visible in its entirety, for at least part of the year, south of the 25th parallel north. In tropical regions Crux can be seen in the sky from April to June. Crux is exactly opposite Cassiopeia on the celestial sphere, and therefore the two cannot appear in the sky at the same time. In this era, south of Cape Town, Adelaide and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky. Crux is sometimes confused with the nearby False Cross by stargazers. The False Cross is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross. Crux is easily visible from the southern hemisphere at practically any time of year. It is also visible near the horizon from tropical latitudes of the northern hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancun or any other place at latitude 25° N or less at around 10 pm at the end of April. There are five main stars.
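The 0.165% area figure quoted above follows from the total solid angle of the celestial sphere, about 41,253 square degrees. A quick check:

```python
import math

# The whole sphere subtends 4*pi steradians; converting to square degrees
# gives roughly 41,253.
SKY_SQ_DEG = 4 * math.pi * (180 / math.pi) ** 2
crux_area = 68                       # square degrees, from the IAU boundaries

fraction = crux_area / SKY_SQ_DEG
print(f"{fraction:.3%}")             # 0.165% of the night sky
```
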
Due to precession, Crux will move closer to the South Pole in the next millennia, up to 67 degrees south declination for the middle of the constellation. However, by the year 14,000 Crux will be visible from most parts of Europe and the continental United States, and by the year 18,000 from northern Europe as well, as it will then lie less than 30 degrees south in declination. In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ to α Crucis (the foot of the crucifix) approximately times beyond gives a point close to the Southern Celestial Pole, which is also, coincidentally, where this line intersects a perpendicular taken southwards from the east–west axis of Alpha Centauri to Beta Centauri, stars at a similar declination to Crux and spanning a similar width as the cross, but of greater brightness. Argentine gauchos are documented as using Crux for night orientation in the Pampas and Patagonia. Alpha and Beta Centauri, being of similar declinations (and thus distance from the pole), are often referred to as the "Southern Pointers" or just "The Pointers", allowing people to easily identify the Southern Cross, the constellation of Crux. Very few bright stars of importance lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately beneath Crux. The 92 brightest stars as viewed from the Earth, down to apparent magnitude +2.5, include three in Crux, making it the constellation most densely populated with such stars (3.26% of the 92, some 19.2 times more than the expected 0.17% that would result from a homogeneous distribution of all bright stars across the sky, given that Crux covers 0.17% of the sky). Within the constellation's borders, there are 49 stars brighter than or equal to apparent magnitude 6.5.
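The overdensity figures just quoted can be reproduced from the round numbers in the text (92 bright stars in the whole sky, 3 in Crux, an area share of about 0.17%):

```python
bright_total = 92          # stars brighter than apparent magnitude +2.5, whole sky
bright_in_crux = 3
area_fraction = 0.0017     # Crux's share of the sky, ~0.17% (rounded as in the text)

observed = bright_in_crux / bright_total
print(f"{observed:.2%}")                    # 3.26% of the brightest stars
print(f"{observed / area_fraction:.1f}x")   # about 19.2x the homogeneous expectation
```

Note that the 19.2 factor depends on using the rounded 0.17% area share; the unrounded 0.165% gives a slightly larger factor.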
The four main stars that form the asterism are Alpha, Beta, Gamma, and Delta Crucis. There is also a fifth star that is often included with the Southern Cross, along with several other naked-eye stars within the borders of Crux. Unusually, a total of 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus Association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda and both the components of the visual double star, Mu. Crux contains many variable stars, including four Cepheid variables that may all reach naked-eye visibility, as well as other well-studied variable stars. The star HD 106906 has been found to have a planet—HD 106906 b—that has one of the widest orbits of any currently known planetary-mass companions. Crux is backlit by the multitude of stars of the Scutum–Crux Arm (more commonly called the Scutum–Centaurus Arm) of the Milky Way, the main inner arm in the local radial quarter of the galaxy. The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the southern hemisphere, particularly of Australia and New Zealand. Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the flags of Australia, Brazil, New Zealand, Papua New Guinea and Samoa.
They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory, the Northern Territory, as well as the flag of the Magallanes Region of Chile, the flag of Londrina (Brazil) and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz). The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and on the cover of Brazilian passports. Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the Brazilian coat of arms, and the cross has featured as the name of the Brazilian currency (the "cruzeiro" from 1942 to 1986 and again from 1990 to 1994). All coins of the (1998) series of the Brazilian real display the constellation. Songs and literature reference the Southern Cross, including the Argentine epic poem "Martín Fierro". The Argentinian singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren". The Cross gets a mention in the lyrics of the Brazilian National Anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines"). The Southern Cross is mentioned in the Australian National Anthem: "Beneath our radiant Southern Cross we'll toil with hearts and hands". The Southern Cross is also mentioned in the Samoan National Anthem: "Vaai 'i na fetu o lo'u a agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa." ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa.") "Southern Cross" is a single released by Crosby, Stills and Nash in 1981. It reached #18 on the Billboard Hot 100 in late 1982. The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation". In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: "Thy Southern Cross the night". A stylized version of Crux appears on the Australian Eureka Flag.
The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organized in the Southern Hemisphere on the island of New Caledonia, and also on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain. The "Petersflagge" flag of the German East Africa Company of 1885–1920, which included a constellation of five white five-pointed Crux "stars" on a red ground, later served as the model for the symbolism of German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and a successor organisation (1956/1983 to the present). Southern Cross station is a major rail terminal in Melbourne, Australia. The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church, primarily within the territory of the Australian Catholic Bishops Conference, for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia. The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia. In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the 'Emu in the Sky' (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the trident of Tagai's Fishing Spear. The Aranda people of central Australia saw the four Cross stars as the talon of an eagle and Gamma Centauri as its leg. 
Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In Indonesia and Malaysia it is known as "Bintang Pari" and "Buruj Pari", respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as "sao Cá Liệt" (the ponyfish star). The Javanese people of Indonesia called this constellation "Gubug pèncèng" ("raking hut") or "lumbung" ("the granary"), because the shape of the constellation was like that of a raking hut. The Southern Cross (α, β, γ and δ Crucis), together with μ Crucis, is one of the asterisms used by Bugis sailors for navigation, called "bintoéng bola képpang", meaning "incomplete house star". The Māori name for the Southern Cross is "Māhutonga", and it is thought of as the anchor ("Te Punga") of Tama-rereti's "waka" (the Milky Way), while the Pointers are its rope. In Tonga it is known as "Toloa" ("duck"); it is depicted as a duck flying south, with one of its wings (δ Crucis) wounded because "Ongo tangata" ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as "Humu" (the "triggerfish") because of its shape. In Samoa the constellation is called "Sumu" ("triggerfish") because of its rhomboid shape, while α and β Centauri are called "Luatagata" (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross, including a knee protector and a net used to catch palolo worms. Neighboring peoples in the Marshall Islands saw these stars as a fish. In Mapudungun, the language of the Patagonian Mapuche, the name of the Southern Cross is "Melipal", which means "four stars". In Quechua, the language of the Inca civilization, Crux is known as "Chakana", which literally means "stair" ("chaka", bridge, link; "hanan", high, above), but carries a deep symbolism within Quechua mysticism. 
Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation encompassing Centaurus and Circinus along with the two bright stars. The Great Rhea was a constellation of the Bororo of Brazil. The Mocoví people of Argentina also saw a rhea that included the stars of Crux. Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta Chamaeleontis and the optical double star Delta¹ and Delta² Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as "Aganagi", angry bees that had emerged from the Coalsack, which they saw as the beehive. Among the Tuareg, the four most visible stars of Crux are considered "iggaren", i.e. four "Maerua crassifolia" trees. The Tswana people of Botswana saw the constellation as "Dithutlwa", two giraffes: Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female.
https://en.wikipedia.org/wiki?curid=6359
Cetus Cetus () is a constellation. Its name refers to Cetus, a sea monster in Greek mythology that both Perseus and Heracles needed to slay; in English it is sometimes called 'the whale'. Cetus lies in the region of the sky that contains other water-related constellations: Aquarius, Pisces and Eridanus. Cetus is not among the 13 true zodiac constellations in the J2000 epoch, nor the classical 12-part zodiac, but the ecliptic passes less than 0.25° from one of its corners. Thus the Moon and planets briefly enter Cetus (occulting any of its stars as foreground objects) in about half of their successive orbits, and the southern part of the Sun appears in Cetus for about one day each year. Many belt asteroids, whose orbits are slightly more inclined to the ecliptic than those of the Moon and planets, spend longer phases occulting the north-western part of Cetus. As seen from Mars, the ecliptic (the apparent plane of the Sun, which is almost the same as the average plane of the planets) passes into Cetus: the centre of the Sun is a foreground object in Cetus for around six days shortly after the northern summer solstice. This is because Mars's orbit is tilted by 1.85° with respect to Earth's, so it is marginally inclined away from the ecliptic. Mira ("wonderful", named by Bayer: Omicron Ceti, a star in the neck of the asterism) was the first variable star to be discovered and is the prototype of its class, the Mira variables. Over a period of 332 days, it reaches a maximum apparent magnitude of 3, visible to the naked eye, and dips to a minimum magnitude of 10, invisible to the unaided eye. Its apparent appearance and disappearance gave it its name. Mira pulsates between a minimum size of 400 solar diameters and a maximum size of 500 solar diameters. Some 420 light-years from Earth, it was discovered by David Fabricius in 1596. α Ceti, traditionally called Menkar ("the nose"), is a red-hued giant star of magnitude 2.5, 220 light-years from Earth. 
It is a wide double star; the secondary is 93 Ceti, a blue-white hued star of magnitude 5.6, 440 light-years away. β Ceti, also called Deneb Kaitos and Diphda, is the brightest star in Cetus. It is an orange-hued giant star of magnitude 2.0, 96 light-years from Earth. The traditional name "Deneb Kaitos" means "the whale's tail". γ Ceti, Kaffaljidhma ("head of the whale"), is a very close double star. The primary is a yellow-hued star of magnitude 3.5, 82 light-years from Earth, and the secondary is a blue-hued star of magnitude 6.6. Tau Ceti is noted for being the nearest Sun-like star, at a distance of 11.9 light-years. It is a yellow-hued main-sequence star of magnitude 3.5. AA Ceti is a triple star system; the brightest member has a magnitude of 6.2. The primary and secondary are separated by 8.4 arcseconds at an angle of 304 degrees. The tertiary is not visible in telescopes. AA Ceti is an eclipsing variable star; the tertiary star passes in front of the primary and causes the system's apparent magnitude to decrease by 0.5 magnitudes. UV Ceti is an unusual binary variable star. 8.7 light-years from Earth, the system consists of two red dwarfs, both of magnitude 13. One of the stars is a flare star, prone to sudden, random outbursts that last several minutes; these increase the pair's apparent brightness significantly, to as high as magnitude 7. Cetus lies far from the galactic plane, so many distant galaxies are visible, unobscured by dust from the Milky Way. Of these, the brightest is Messier 77 (NGC 1068), a 9th-magnitude spiral galaxy near Delta Ceti. It appears face-on and has a clearly visible nucleus of magnitude 10. About 50 million light-years from Earth, M77 is also a Seyfert galaxy and thus a bright object in the radio spectrum. Recently, the galaxy cluster JKCS 041 was confirmed to be the most distant cluster of galaxies yet discovered. The massive cD galaxy Holmberg 15A is also found in Cetus. 
So are the spiral galaxy NGC 1042 and the ultra-diffuse galaxy NGC 1052-DF2. IC 1613 (Caldwell 51) is an irregular dwarf galaxy near the star 26 Ceti and is a member of the Local Group. NGC 246 (Caldwell 56), also called the Cetus Ring, is a planetary nebula with a magnitude of 8.0, 1600 light-years from Earth. Among some amateur astronomers, NGC 246 has garnered the nickname "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field. Cetus may have originally been associated with a whale, which would have had mythic status amongst Mesopotamian cultures. It is often now called the Whale, though it is most strongly associated with Cetus the sea monster, which was slain by Perseus as he saved the princess Andromeda from Poseidon's wrath. Cetus is located in a region of the sky called "The Sea" because many water-associated constellations are placed there, including Eridanus, Pisces, Piscis Austrinus, Capricornus, and Aquarius. Cetus has been depicted in many ways throughout its history. In the 17th century, Cetus was depicted as a "dragon fish" by Johann Bayer, while both Willem Blaeu and Andreas Cellarius depicted Cetus as a whale-like creature in the same century. However, Cetus has also been variously depicted with animal heads attached to a piscine body. In Chinese astronomy, the stars of Cetus are found among two areas: the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ") and the White Tiger of the West (西方白虎, "Xī Fāng Bái Hǔ"). The Tukano and Kobeua people of the Amazon used the stars of Cetus to create a jaguar, representing the god of hurricanes and other violent storms. Lambda, Mu, Xi, Nu, Gamma, and Alpha Ceti represented its head; Omicron, Zeta, and Chi Ceti represented its body; Eta Eridani, Tau Ceti, and Upsilon Ceti marked its legs and feet; and Theta, Eta, and Beta Ceti delineated its tail. In Hawaii, the constellation was called "Na Kuhi", and Mira (Omicron Ceti) may have been called "Kane". 
USS Cetus (AK-77) was a United States Navy Crater class cargo ship named after the constellation. "Cetus" is the title of a ragtime piano composition by Tom Brier on the album "Constellations".
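The magnitude figures quoted above for Cetus's variable stars translate into brightness ratios through the standard (Pogson) magnitude scale, in which five magnitudes correspond to a factor of exactly 100 in flux. A minimal sketch; the helper function is illustrative and not part of the article:

```python
def flux_ratio(m_faint: float, m_bright: float) -> float:
    """Brightness ratio between two apparent magnitudes (Pogson's relation).

    Each magnitude step corresponds to a factor of 100**(1/5) ≈ 2.512 in flux,
    so a difference of dm magnitudes is a factor of 10**(0.4 * dm).
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# Mira swings between magnitude 3 (naked-eye) and magnitude 10 (invisible):
print(round(flux_ratio(10, 3)))  # about a 630-fold change in brightness

# A UV Ceti flare taking the pair from magnitude 13 up to magnitude 7:
print(round(flux_ratio(13, 7)))  # about a 250-fold brightening
```

This is why Mira's cycle reads as an "appearance and disappearance": the swing of seven magnitudes is a change of over six hundred times in received light.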
https://en.wikipedia.org/wiki?curid=6362
Carina (constellation) Carina is a constellation in the southern sky. Its name is Latin for the hull or keel of a ship, and it was the southern foundation of the larger constellation of Argo Navis (the ship "Argo") until that constellation was divided into three pieces, the other two being Puppis (the poop deck) and Vela (the sails of the ship). Carina was once a part of Argo Navis, the great ship of Jason and the Argonauts who searched for the Golden Fleece. The constellation of Argo was introduced in ancient Greece. However, due to the massive size of Argo Navis and the sheer number of stars that required separate designation, Nicolas Louis de Lacaille divided Argo into three sections in 1763, including Carina (the hull or keel). In the 19th century, these three became established as separate constellations, and they were formally included in the list of 88 modern IAU constellations in 1930. Lacaille kept a single set of Greek-letter designations for the whole of Argo, and separate sets of Latin-letter designations for each of the three sections. Therefore, Carina has α, β and ε; Vela has γ and δ; Puppis has ζ; and so on. Carina contains Canopus, a white-hued supergiant that is the second-brightest star in the night sky, at magnitude −0.72. Alpha Carinae, as Canopus is formally designated, is 313 light-years from Earth. Its traditional name comes from the mythological Canopus, who was a navigator for Menelaus, king of Sparta. There are several other stars above magnitude 3 in Carina. Beta Carinae, traditionally called Miaplacidus, is a blue-white hued star of magnitude 1.7, 111 light-years from Earth. Epsilon Carinae is an orange-hued giant star similarly bright to Miaplacidus at magnitude 1.9; it is 630 light-years from Earth. Another fairly bright star is the blue-white hued Theta Carinae; it is a magnitude 2.7 star 440 light-years from Earth. Theta Carinae is also the most prominent member of the cluster IC 2602. 
Iota Carinae is a white-hued supergiant star of magnitude 2.2, 690 light-years from Earth. Eta Carinae is the most prominent variable star in Carina; it has a mass of approximately 100 solar masses and is 4 million times as bright as the Sun. It was first discovered to be unusual in 1677, when its magnitude suddenly rose to 4, attracting the attention of Edmond Halley. Eta Carinae lies inside NGC 3372, commonly called the Carina Nebula. It had a long outburst in 1827, when it brightened to magnitude 1, fading only to magnitude 1.5 in 1828. Its most prominent outburst made Eta Carinae the equal of Sirius: it brightened to magnitude −1.5 in 1843. In the decades following 1843 it appeared relatively placid, with a magnitude between 6.5 and 7.9. However, in 1998 it brightened again, though only to magnitude 5.0, a far less drastic outburst. Eta Carinae is a binary star, with a companion that has a period of 5.5 years; the two stars are surrounded by the Homunculus Nebula, which is composed of gas that was ejected in 1843. There are several less prominent variable stars in Carina. l Carinae is a Cepheid variable noted for its brightness; it is the brightest Cepheid whose variations can be followed with the unaided eye. It is a yellow-hued supergiant star with a minimum magnitude of 4.2 and a maximum magnitude of 3.3; it has a period of 35.5 days. Two bright Mira variable stars are in Carina: R Carinae and S Carinae; both are red giants. R Carinae has a minimum magnitude of 10.0 and a maximum magnitude of 4.0. Its period is 309 days and it is 416 light-years from Earth. S Carinae is similar, with a minimum magnitude of 10.0 and a maximum magnitude of 5.0; however, it has a shorter period of 150 days, and it is much more distant at 1300 light-years from Earth. Carina is home to several double stars and binary stars. Upsilon Carinae is a binary star with two blue-white hued giant components, 1600 light-years from Earth. 
The primary is of magnitude 3.0 and the secondary is of magnitude 6.0; the two components are distinguishable in a small amateur telescope. Two asterisms are prominent in Carina. One is known as the Diamond Cross, which is larger than the Southern Cross but fainter and, from the perspective of a southern-hemisphere viewer, upside down, the long axes of the two crosses being close to parallel. The other is the False Cross, often mistaken for the Southern Cross, the asterism formed by the bright stars of Crux. The False Cross consists of two stars in Carina, Iota Carinae and Epsilon Carinae, and two stars in Vela, Kappa Velorum and Delta Velorum. Carina is known for its namesake nebula, NGC 3372, discovered by French astronomer Nicolas Louis de Lacaille in 1751, which itself contains several smaller nebulae. The Carina Nebula overall is an extended emission nebula approximately 8,000 light-years away and 300 light-years wide that includes vast star-forming regions. It has an overall magnitude of 8.0 and an apparent diameter of over 2 degrees. Its central region is called the Keyhole, or the Keyhole Nebula. This was described in 1847 by John Herschel and likened to a keyhole by Emma Converse in 1873. The Keyhole is about seven light-years wide and is composed mostly of ionized hydrogen, with two major star-forming regions. The Homunculus Nebula is a small bipolar nebula, visible to the naked eye, that is being ejected by the erratic luminous blue variable star Eta Carinae, the most massive visible star known. Eta Carinae is so massive that it has reached the theoretical upper limit for the mass of a star and is therefore unstable. It is known for its outbursts; in 1843 it briefly became one of the brightest stars in the sky due to a particularly massive outburst, which largely created the Homunculus Nebula. 
Because of this instability and history of outbursts, Eta Carinae is considered a prime supernova candidate for the next several hundred thousand years, because it has reached the end of its estimated million-year life span. NGC 2516 is an open cluster that is both quite large (approximately half a degree square) and bright, visible to the unaided eye. It is located 1100 light-years from Earth and has approximately 80 stars, the brightest of which is a red giant star of magnitude 5.2. NGC 3114 is another open cluster of approximately the same size, though it is more distant at 3000 light-years from Earth. It is looser and dimmer than NGC 2516, as its brightest stars are only of 6th magnitude. The most prominent open cluster in Carina is IC 2602, also called the "Southern Pleiades". It contains Theta Carinae, along with several other stars visible to the unaided eye. In total, the cluster possesses approximately 60 stars. The Southern Pleiades is particularly large for an open cluster, with a diameter of approximately one degree. Like IC 2602, NGC 3532 is visible to the unaided eye and is of comparable size. It possesses approximately 150 stars arranged in an unusual shape, approximating an ellipse with a dark central area. Several prominent orange giants of 7th magnitude are among the cluster's brighter stars. Superimposed on the cluster is Chi Carinae, a yellow-white hued star of magnitude 3.9, far more distant than NGC 3532. Carina also contains the naked-eye globular cluster NGC 2808. Epsilon Carinae and Upsilon Carinae are double stars visible in small telescopes. One noted galaxy cluster is 1E 0657-56, the Bullet Cluster. At a distance of 4 billion light-years (redshift 0.296), this galaxy cluster is named for the shock wave seen in the intracluster medium, which resembles the shock wave of a supersonic bullet. 
The visible bow shock is thought to be due to the smaller galaxy cluster moving through the intracluster medium at a speed of 3000–4000 kilometers per second relative to the larger cluster. Because this gravitational interaction has been ongoing for hundreds of millions of years, the smaller cluster is being destroyed and will eventually merge with the larger cluster. Carina contains the radiant of the Eta Carinids meteor shower, which peaks around January 21 each year. From China (especially northern China), the stars of Carina can barely be seen. The star Canopus (the south polar star in Chinese astronomy) was located by Chinese astronomers in the Vermilion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"). The rest of the stars were first classified by Xu Guangqi during the Ming Dynasty, based on knowledge acquired from western star charts, and placed among the Southern Asterisms (近南極星區, "Jìnnánjíxīngqū"). Polynesian peoples had no name for the constellation as such, though they had many names for Canopus. The Māori name "Ariki" ("High-born") and the Hawaiian "Ke Alii-o-kona-i-ka-lewa" ("The Chief of the southern expanse") both attest to the star's prominence in the southern sky, while the Māori "Atutahi" ("First-light" or "Single-light") and the Tuamotu "Te Tau-rari" and "Marere-te-tavahi" ("He-who-stands-alone") refer to the star's solitary nature. It was also called "Kapae-poto" ("Short horizon"), because it rarely sets from the vantage point of New Zealand, and "Kauanga" ("Solitary"), when it was the last star visible before sunrise. Carina lies in the southern sky quite near the south celestial pole, making it circumpolar (it never sets) for most of the southern hemisphere. Due to precession of Earth's axis, by the year 4700 the south celestial pole will be in Carina. 
Three bright stars in Carina will come within 1 degree of the southern celestial pole and take turns as the southern pole star: Omega Carinae (magnitude 3.29) in 5600, Upsilon Carinae (magnitude 2.97) in 6700, and Iota Carinae (magnitude 2.21) in 7900. Around the year 13860, the bright Canopus (magnitude −0.7) will have a declination greater than −82°. A United States Navy Crater-class cargo ship was also named after the constellation.
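The circumpolarity claim above follows from simple spherical geometry: a star never sets for an observer whose distance from the pole (the co-latitude) is no greater than the star's distance from the celestial pole. A minimal sketch, assuming an approximate declination of −52.7° for Canopus (a figure not given in the article) and ignoring atmospheric refraction:

```python
def min_southern_latitude_circumpolar(declination_deg: float) -> float:
    """Southernmost latitude (returned as a negative number) at which a star
    with the given southern (negative) declination becomes circumpolar.

    A southern star never sets when 90° - |latitude| <= |declination|,
    i.e. for observers south of latitude -(90° - |declination|).
    """
    if declination_deg >= 0:
        raise ValueError("expected a southern (negative) declination")
    return -(90.0 - abs(declination_deg))

# Canopus sits at roughly declination -52.7° (approximate, not from the
# article), so it is circumpolar south of about latitude 37.3° S:
print(round(min_southern_latitude_circumpolar(-52.7), 1))  # -37.3
```

The same condition explains why Crux and most of Carina, with declinations near −60°, stay above the horizon for almost all inhabited southern latitudes.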
https://en.wikipedia.org/wiki?curid=6363