North Korea (Korean: 조선, MR: "Chosŏn"; literally 북조선, MR: "Pukchosŏn", or 북한/北韓, RR: "Bukhan" in South Korean usage), officially the Democratic People's Republic of Korea (DPRK or DPR Korea; Korean: 조선민주주의인민공화국, "Chosŏn Minjujuŭi Inmin Konghwaguk"), is a country in East Asia constituting the northern part of the Korean Peninsula. The country is bordered to the north by China and by Russia along the Amnok (known as the Yalu in Chinese) and Tumen rivers, and to the south by South Korea, with the heavily fortified Korean Demilitarized Zone (DMZ) separating the two. North Korea, like its southern counterpart, claims to be the legitimate government of the entire peninsula and adjacent islands. Pyongyang is the country's capital and largest city. In 1910, Korea was annexed by Imperial Japan. Upon the Japanese surrender at the end of World War II in 1945, Korea was divided into two zones, with the north occupied by the Soviet Union and the south occupied by the United States. Negotiations on reunification failed, and in 1948, separate governments were formed: the socialist Democratic People's Republic of Korea in the north, and the capitalist Republic of Korea in the south. An invasion initiated by North Korea led to the Korean War (1950–1953). The Korean Armistice Agreement brought about a ceasefire, but no peace treaty was signed. According to article 1 of the constitution of North Korea, the DPRK is an "independent socialist State". North Korea holds elections, though they have been described by independent observers as sham elections. North Korea is generally viewed as a totalitarian Stalinist dictatorship, notable in particular for the elaborate cult of personality around the Kim dynasty. The Workers' Party of Korea (WPK), led by a member of the ruling family, holds absolute power in the state and leads the Democratic Front for the Reunification of the Fatherland, of which all political officers are required to be members. According to article 3 of the constitution, "Kimilsungism-Kimjongilism" is the official ideology of North Korea. The means of production are owned by the state through state-run enterprises and collectivized farms. Most services—such as healthcare, education, housing and food production—are subsidized or state-funded. From 1994 to 1998, North Korea suffered a famine that resulted in the deaths of between 240,000 and 420,000 people, and the population continues to suffer malnutrition. North Korea follows "Songun", or "military-first", policy. It has the highest number of military and paramilitary personnel of any country, with a total of 9,495,000 active, reserve and paramilitary personnel. Its active-duty army of 1.21 million is the fourth-largest in the world, after those of China, the United States and India. It possesses nuclear weapons. In addition to being a member of the United Nations since 1991, North Korea is also a member of the Non-Aligned Movement, the G77 and the ASEAN Regional Forum. A 2014 UN inquiry into abuses of human rights in North Korea concluded that "the gravity, scale and nature of these violations reveal a state that does not have any parallel in the contemporary world," with Amnesty International and Human Rights Watch holding similar views. The North Korean government denies these abuses. The name "Korea" derives from the name "Goryeo" (also spelled "Koryŏ").
The name "Goryeo" itself was first used by the ancient kingdom of Goguryeo (Koguryŏ) which was one of the great powers in East Asia during its time, ruling most of the Korean Peninsula, Manchuria, parts of the Russian Far East and parts of Inner Mongolia, under Gwanggaeto the Great. The 10th-century kingdom of Goryeo succeeded Goguryeo, and thus inherited its name, which was pronounced by visiting Persian merchants as "Korea". The modern spelling of Korea first appeared in the late 17th century in the travel writings of the Dutch East India Company's Hendrick Hamel. After the division of the country into North and South Korea, the two sides used different terms to refer to Korea: "Chosun" or "Joseon" (조선) in North Korea, and "Hanguk" (한국) in South Korea. In 1948, North Korea adopted "Democratic People's Republic of Korea" (, "Chosŏn Minjujuŭi Inmin Konghwaguk"; ) as its new legal name. In the wider world, because the government controls the northern part of the Korean Peninsula, it is commonly called North Korea to distinguish it from South Korea, which is officially called the "Republic of Korea" in English. Both governments consider themselves to be the legitimate government of the whole of Korea. For this reason, the people do not consider themselves as 'North Koreans' but as Koreans in the same divided country as their compatriots in the South and foreign visitors are discouraged from using the former term. After the First Sino-Japanese War and the Russo-Japanese War, Korea was occupied by Japan from 1910 to 1945. Korean resistance groups known as Dongnipgun (Liberation Army) operated along the Sino-Korean border, fighting guerrilla warfare against Japanese forces. Some of them took part in allied action in China and parts of South East Asia. One of the guerrilla leaders was the communist Kim Il-sung, who later became the first leader of North Korea. After the Japanese surrender at the end of World War II in 1945, the Korean Peninsula was divided into two zones along the 38th parallel, with the northern half of the peninsula occupied by the Soviet Union and the southern half by the United States. Negotiations on reunification failed. Soviet general Terentii Shtykov recommended the establishment of the Soviet Civil Authority in October 1945, and supported Kim Il-sung as chairman of the Provisional People's Committee for North Korea, established in February 1946. In September 1946, South Korean citizens rose up against the Allied Military Government. In April 1948, an uprising of the Jeju islanders was violently crushed. The South declared its statehood in May 1948 and two months later the ardent anti-communist Syngman Rhee became its ruler. The Democratic People's Republic of Korea was established in the North on 9 September 1948. Shtykov served as the first Soviet ambassador, while Kim Il-sung became premier. Soviet forces withdrew from the North in 1948, and most American forces withdrew from the South in 1949. Ambassador Shtykov suspected Rhee was planning to invade the North and was sympathetic to Kim's goal of Korean unification under socialism. The two successfully lobbied Joseph Stalin to support a quick war against the South, which culminated in the outbreak of the Korean War. The military of North Korea invaded the South on 25 June 1950, and swiftly overran most of the country. A United Nations force, led by the United States, intervened to defend the South, and rapidly advanced into North Korea. 
As they neared the border with China, Chinese forces intervened on behalf of North Korea, shifting the balance of the war again. Fighting ended on 27 July 1953, with an armistice that approximately restored the original boundaries between North and South Korea, but no peace treaty was signed. Approximately 3 million people died in the Korean War, with a higher proportional civilian death toll than World War II or the Vietnam War, making it perhaps the deadliest conflict of the Cold War era. In both per capita and absolute terms, North Korea was the country most devastated by the war, which resulted in the death of an estimated 12–15% of the North Korean population (then approximately 10 million people), "a figure close to or surpassing the proportion of Soviet citizens killed in World War II," according to Charles K. Armstrong. As a result of the war, almost every substantial building in North Korea was destroyed. Some have referred to the conflict as a civil war, with other factors involved. A heavily guarded demilitarized zone (DMZ) still divides the peninsula, and an anti-communist and anti-North Korea sentiment remains in South Korea. Since the war, the United States has maintained a strong military presence in the South, which is depicted by the North Korean government as an imperialist occupation force. It claims that the Korean War was caused by the United States and South Korea. The relative peace between the South and the North following the armistice was interrupted by border skirmishes, celebrity abductions, and assassination attempts. The North failed in several assassination attempts on South Korean leaders, such as in 1968, 1974, and the Rangoon bombing in 1983; tunnels were found under the DMZ and tensions flared over the axe murder incident at Panmunjom in 1976. For almost two decades after the war, the two states did not seek to negotiate with one another. In 1971, secret, high-level contacts began to be conducted, culminating in the 1972 July 4th North–South Joint Statement that established principles of working toward peaceful reunification. The talks ultimately failed because in 1973, South Korea declared its preference that the two Koreas should seek separate memberships in international organizations. During the 1956 August Faction Incident, Kim Il-sung successfully resisted efforts by the Soviet Union and China to depose him in favor of Soviet Koreans or the pro-Chinese Yan'an faction. The last Chinese troops withdrew from the country in October 1958, which is generally regarded as the latest date by which North Korea had become effectively independent. Some scholars believe that the 1956 August incident had already demonstrated North Korea's independence. North Korea remained closely aligned with China and the Soviet Union, and the Sino-Soviet split allowed Kim to play the powers off each other. North Korea sought to become a leader of the Non-Aligned Movement, and emphasized the ideology of "Juche" to distinguish it from both the Soviet Union and China. In United States policymaking, North Korea was considered among the Captive Nations. Recovery from the war was quick—by 1957 industrial production reached 1949 levels. By 1959, relations with Japan had improved somewhat, and North Korea began allowing the repatriation of Japanese citizens in the country. The same year, North Korea revalued the North Korean won, which held greater value than its South Korean counterpart. Until the 1960s, economic growth was higher than in South Korea, and North Korean GDP per capita was equal to that of its southern neighbor as late as 1976.
However, by the 1980s, the economy had begun to stagnate; it started its long decline in 1987 and almost completely collapsed after the dissolution of the Soviet Union in 1991, when all Soviet aid was suddenly halted. In 1992, as Kim Il-sung's health began deteriorating, Kim Jong-il slowly began taking over various state tasks. Kim Il-sung died of a heart attack in 1994; Kim Jong-il declared a three-year period of national mourning before officially assuming the position of leader. North Korea promised to halt its development of nuclear weapons under the Agreed Framework, negotiated with U.S. president Bill Clinton and signed in 1994. Building on Nordpolitik, South Korea began to engage with the North as part of its Sunshine Policy. Kim Jong-il instituted a policy called "Songun", or "military first". There is much speculation about this policy being used as a strategy to strengthen the military while discouraging coup attempts. Flooding in the mid-1990s exacerbated the economic crisis, severely damaging crops and infrastructure and leading to a widespread famine that the government proved incapable of curtailing, resulting in the deaths of between 240,000 and 420,000 people. In 1996, the government accepted UN food aid. The international environment changed with the election of U.S. president George W. Bush in 2001. His administration rejected South Korea's Sunshine Policy and the Agreed Framework. The U.S. government treated North Korea as a rogue state, while North Korea redoubled its efforts to acquire nuclear weapons to avoid the fate of Iraq. On 9 October 2006, North Korea announced it had conducted its first nuclear weapons test. U.S. President Barack Obama adopted a policy of "strategic patience", resisting making deals with North Korea. Tensions with South Korea and the United States increased in 2010 with the sinking of the South Korean warship "Cheonan" and North Korea's shelling of Yeonpyeong Island. On 17 December 2011, Kim Jong-il died from a heart attack. His youngest son Kim Jong-un was announced as his successor. In the face of international condemnation, North Korea continued to develop its nuclear arsenal, possibly including a hydrogen bomb and a missile capable of reaching the United States. Throughout 2017, following Donald Trump's assumption of the US presidency, tensions between the United States and North Korea increased, and there was heightened rhetoric between the two, with Trump threatening "fire and fury" and North Korea threatening to test missiles that would land near Guam. The tensions substantially decreased in 2018, and a détente developed. A series of summits took place between Kim Jong-un of North Korea, President Moon Jae-in of South Korea, and President Trump. North Korea has not conducted an ICBM test since. North Korea occupies the northern portion of the Korean Peninsula, lying between latitudes 37° and 43°N, and longitudes 124° and 131°E. It covers an area of approximately 120,540 square kilometres. North Korea is bordered by China and by Russia along the Amnok (known as the Yalu in Chinese) and Tumen rivers and borders South Korea along the Korean Demilitarized Zone. To its west are the Yellow Sea and Korea Bay, and to its east lies Japan across the Sea of Japan (East Sea of Korea). Early European visitors to Korea remarked that the country resembled "a sea in a heavy gale" because of the many successive mountain ranges that crisscross the peninsula. Some 80 percent of North Korea is composed of mountains and uplands, separated by deep and narrow valleys.
All of the Korean Peninsula's mountains with elevations of 2,000 metres or more are located in North Korea. The highest point in North Korea is Paektu Mountain, a volcanic mountain with an elevation of 2,744 metres above sea level. Considered a sacred place by North Koreans, Mount Paektu holds significance in Korean culture and has been incorporated into the elaborate folklore and cult of personality around the Kim dynasty. For example, the song "We Will Go To Mount Paektu" praises Kim Jong-un and describes a symbolic trek to the mountain. Other prominent ranges are the Hamgyong Range in the extreme northeast and the Rangrim Mountains, which are located in the north-central part of North Korea. Mount Kumgang in the Taebaek Range, which extends into South Korea, is famous for its scenic beauty. The coastal plains are wide in the west and discontinuous in the east. A great majority of the population lives in the plains and lowlands. According to a United Nations Environmental Programme report in 2003, forest covers over 70 percent of the country, mostly on steep slopes. The longest river is the Amnok (Yalu) River, which flows for about 790 kilometres. North Korea experiences a combination of continental and oceanic climates, but most of the country experiences a humid continental climate within the Köppen climate classification scheme. Winters bring clear weather interspersed with snow storms as a result of northern and northwestern winds that blow from Siberia. Summer tends to be by far the hottest, most humid, and rainiest time of year because of the southern and southeastern monsoon winds that carry moist air from the Pacific Ocean. Approximately 60 percent of all precipitation occurs from June to September. Spring and autumn are transitional seasons between summer and winter. In Pyongyang, January is the coldest month and August the warmest. North Korea functions as a highly centralized, one-party state. According to its 2016 constitution, it is a self-described revolutionary and socialist state "guided in its activities by the Juche idea and the Songun idea". In addition to the constitution, North Korea is governed by the Ten Principles for the Establishment of a Monolithic Ideological System (also known as the "Ten Principles of the One-Ideology System"), which establishes standards for governance and a guide for the behavior of North Koreans. The Workers' Party of Korea (WPK), led by a member of the Kim dynasty, has an estimated 3,000,000 members and dominates every aspect of North Korean politics. It has two satellite organizations, the Korean Social Democratic Party and the Chondoist Chongu Party, which participate in the WPK-led Democratic Front for the Reunification of the Fatherland, of which all political officers are required to be members. Kim Jong-un of the Kim dynasty is the current Supreme Leader or "Suryeong" of North Korea. He heads all major governing structures: he is Chairman of the Workers' Party of Korea, Chairman of the State Affairs Commission of North Korea, and Supreme Commander of the Korean People's Army. His grandfather Kim Il-sung, the founder and leader of North Korea until his death in 1994, is the country's "eternal President", while his father Kim Jong-il, who succeeded Kim Il-sung as leader, was declared "Eternal General Secretary" and "Eternal Chairman of the National Defence Commission" after his death in 2011. According to the Constitution of North Korea, there are officially three main branches of government.
The first of these is the State Affairs Commission of North Korea, which acts as "the supreme national guidance organ of state sovereignty". Its role is to deliberate on and decide the work of defense building of the State, including major state policies, and to carry out the directions of the Chairman of the commission, Kim Jong-un. Legislative power is held by the unicameral Supreme People's Assembly (SPA). Its 687 members are elected every five years by universal suffrage, though the elections have been described by outside observers as shams. Supreme People's Assembly sessions are convened by the SPA Presidium, whose president (Choe Ryong-hae since 2019) represents the state in relations with foreign countries. Deputies formally elect the President, the vice-presidents and members of the Presidium and take part in the constitutionally appointed activities of the legislature: passing laws, establishing domestic and foreign policies, appointing members of the cabinet, and reviewing and approving the state economic plan, among others. The SPA itself cannot initiate any legislation independently of party or state organs. It is unknown whether it has ever criticized or amended bills placed before it, and the elections are based around a single list of WPK-approved candidates who stand without opposition. Executive power is vested in the Cabinet of North Korea, which has been headed by Premier Kim Jae-ryong since 11 April 2019. The Premier represents the government and functions independently. His authority extends over two vice-premiers, 30 ministers, two cabinet commission chairmen, the cabinet chief secretary, the president of the Central Bank, the director of the Central Bureau of Statistics and the president of the Academy of Sciences. A 31st ministry, the Ministry of People's Armed Forces, is under the jurisdiction of the State Affairs Commission. North Korea, like its southern counterpart, claims to be the legitimate government of the entire Korean peninsula and adjacent islands. Despite its official title as the "Democratic People's Republic of Korea", some observers have described North Korea's political system as an absolute monarchy or a "hereditary dictatorship". It has also been described as a Stalinist dictatorship. The "Juche" ideology is the cornerstone of party works and government operations. It is viewed by the official North Korean line as an embodiment of Kim Il-sung's wisdom, an expression of his leadership, and an idea which provides "a complete answer to any question that arises in the struggle for national liberation". "Juche" was first proclaimed in December 1955 in order to emphasize a Korea-centered revolution. Its core tenets are economic self-sufficiency, military self-reliance and an independent foreign policy. The roots of "Juche" lay in a complex mixture of factors, including the cult of personality centered on Kim Il-sung, the conflict with pro-Soviet and pro-Chinese dissenters, and Korea's centuries-long struggle for independence. "Juche" was introduced into the constitution in 1972. "Juche" was initially promoted as a "creative application" of Marxism–Leninism, but in the mid-1970s, it was described by state propaganda as "the only scientific thought... and most effective revolutionary theoretical structure that leads to the future of communist society". "Juche" eventually replaced Marxism–Leninism entirely by the 1980s, and in 1992 references to the latter were omitted from the constitution.
The 2009 constitution dropped references to communism and elevated the "Songun" military-first policy while explicitly confirming the position of Kim Jong-il. However, the constitution retains references to socialism. The concepts of self-reliance in "Juche" have evolved with time and circumstances, but they still provide the groundwork for the spartan austerity, sacrifice and discipline demanded by the party. Scholar Brian Reynolds Myers views North Korea's actual ideology as a Korean ethnic nationalism similar to statism in Shōwa Japan and European fascism. North Korea is ruled by the Kim dynasty, which in North Korea is referred to as the "Mount Paektu Bloodline". It is a three-generation lineage descending from the country's first leader, Kim Il-sung, who came to power in 1948. Kim developed a cult of personality closely tied to the state philosophy of "Juche", which was later passed on to his successors: his son Kim Jong-il and grandson Kim Jong-un. In 2013, this lineage was made explicit when Clause 2 of Article 10 of the newly edited Ten Fundamental Principles of the Korean Workers' Party stated that the party and revolution must be carried on "eternally" by the "Baekdu bloodline". In order to solidify the "Mount Paektu Bloodline", Kim Il-sung and Kim Jong-il recalled all family genealogy books under the pretext that familyism and regionalism threatened the revolution. In 1958, North Korea declared its ideology to be socialism, confiscated private property, and dismantled family groups whose lives had centered on genealogy and ancestor veneration. They later moved the entire population from the northern 38th parallel. Hence, North Korean names no longer carry a bon-gwan (ancestral clan seat). According to "New Focus International", the cult of personality, particularly surrounding Kim Il-sung, has been crucial for legitimizing the family's hereditary succession. The control the North Korean government exercises over many aspects of the nation's culture is used to perpetuate the cult of personality surrounding Kim Il-sung and Kim Jong-il. While visiting North Korea in 1979, journalist Bradley Martin wrote that nearly all music, art, and sculpture that he observed glorified "Great Leader" Kim Il-sung, whose personality cult was then being extended to his son, "Dear Leader" Kim Jong-il. Claims that the dynasty has been deified are contested by North Korea researcher B. R. Myers: "Divine powers have never been attributed to either of the two Kims. In fact, the propaganda apparatus in Pyongyang has generally been careful "not" to make claims that run directly counter to citizens' experience or common sense." He further explains that the state propaganda painted Kim Jong-il as someone whose expertise lay in military matters and that the famine of the 1990s was partially caused by natural disasters out of Kim Jong-il's control. The song "No Motherland Without You", sung by the North Korean army choir, was created especially for Kim Jong-il and is one of the most popular tunes in the country. Kim Il-sung is still officially revered as the nation's "Eternal President". Several landmarks in North Korea are named for Kim Il-sung, including Kim Il-sung University, Kim Il-sung Stadium, and Kim Il-sung Square. Defectors have been quoted as saying that North Korean schools deify both father and son. Kim Il-sung rejected the notion that he had created a cult around himself, and accused those who suggested this of "factionalism".
Following the death of Kim Il-sung, North Koreans prostrated themselves and wept before a bronze statue of him in organized events; similar scenes were broadcast by state television following the death of Kim Jong-il. Critics maintain that Kim Jong-il's personality cult was inherited from his father. Kim Jong-il was often the center of attention throughout ordinary life. His birthday is one of the most important public holidays in the country. On his 60th birthday (based on his official date of birth), mass celebrations occurred throughout the country. Kim Jong-il's personality cult, although significant, was not as extensive as his father's. One point of view is that Kim Jong-il's cult of personality was solely out of respect for Kim Il-sung or out of fear of punishment for failure to pay homage, while North Korean government sources consider it genuine hero worship. The extent of the cult of personality surrounding Kim Jong-il and Kim Il-sung was illustrated on 11 June 2012, when a 14-year-old North Korean schoolgirl drowned while attempting to rescue portraits of the two from a flood. As a result of its isolation, North Korea is sometimes known as the "hermit kingdom", a term that originally referred to the isolationism in the latter part of the Joseon Dynasty. Initially, North Korea had diplomatic ties only with other communist countries, and even today, most of the foreign embassies accredited to North Korea are located in Beijing rather than in Pyongyang. In the 1960s and 1970s, it pursued an independent foreign policy, established relations with many developing countries, and joined the Non-Aligned Movement. In the late 1980s and the 1990s, its foreign policy was thrown into turmoil with the collapse of the Soviet bloc. Suffering an economic crisis, it closed a number of its embassies. At the same time, North Korea sought to build relations with developed free market countries. North Korea joined the United Nations in 1991 together with South Korea. North Korea is also a member of the Non-Aligned Movement, G77 and the ASEAN Regional Forum. North Korea enjoys a close relationship with China, which is often called North Korea's closest ally. Relations were strained in recent years because of China's concerns about North Korea's nuclear program. However, relations have started to improve again and have become increasingly close, especially after Xi Jinping, General Secretary of the Communist Party of China, visited North Korea in June 2019. North Korea has diplomatic relations with 166 countries and embassies in 47 countries. However, owing to the human rights and political situation, North Korea does not have diplomatic relations with Argentina, Botswana, Estonia, France, Iraq, Israel, Japan, Taiwan, and the United States. As of September 2017, France and Estonia are the last two European countries that do not have an official relationship with North Korea. North Korea continues to have strong ties with its socialist southeast Asian allies in Vietnam and Laos, as well as with Cambodia. North Korea was previously designated a state sponsor of terrorism because of its alleged involvement in the 1983 Rangoon bombing and the 1987 bombing of a South Korean airliner. On 11 October 2008, the United States removed North Korea from its list of states that sponsor terrorism after Pyongyang agreed to cooperate on issues related to its nuclear program. North Korea was re-designated a state sponsor of terrorism by the U.S. under the Trump administration on 20 November 2017.
The kidnapping of at least 13 Japanese citizens by North Korean agents in the 1970s and the 1980s has affected North Korea's relationship with Japan. US President Donald Trump met with Kim in Singapore on 12 June 2018. An agreement was signed between the two countries endorsing the April 2018 Panmunjom Declaration signed by North and South Korea, pledging to work towards denuclearizing the Korean Peninsula. Trump and Kim met again in Hanoi from 27 to 28 February 2019, but failed to reach an agreement. On 30 June 2019, Trump met with Kim along with Moon Jae-in at the Korean DMZ. The Korean Demilitarized Zone with South Korea remains the most heavily fortified border in the world. Inter-Korean relations are at the core of North Korean diplomacy and have seen numerous shifts in the last few decades. North Korea's policy is to seek reunification without what it sees as outside interference, through a federal structure retaining each side's leadership and systems. In 1972, the two Koreas agreed in principle to achieve reunification through peaceful means and without foreign interference. On 10 October 1980, then North Korean president Kim Il-sung proposed a federation between North and South Korea named the Democratic Federal Republic of Korea in which the respective political systems would initially remain. However, relations remained cool well into the early 1990s, apart from a brief period in the early 1980s when North Korea offered to provide flood relief to its southern neighbor. Although the offer was initially welcomed, talks over how to deliver the relief goods broke down and none of the promised aid ever crossed the border. The two countries also organized a reunion of 92 separated families. The Sunshine Policy instituted by South Korean president Kim Dae-jung in 1998 was a watershed in inter-Korean relations. It encouraged other countries to engage with the North, which allowed Pyongyang to normalize relations with a number of European Union states and contributed to the establishment of joint North-South economic projects. The culmination of the Sunshine Policy was the 2000 Inter-Korean summit, when Kim Dae-jung visited Kim Jong-il in Pyongyang. Both North and South Korea signed the June 15th North–South Joint Declaration, in which both sides promised to seek peaceful reunification. On 4 October 2007, South Korean president Roh Moo-hyun and Kim Jong-il signed an eight-point peace agreement. However, relations worsened when South Korean president Lee Myung-bak adopted a more hard-line approach and suspended aid deliveries pending the de-nuclearization of the North. In 2009, North Korea responded by ending all of its previous agreements with the South. It deployed additional ballistic missiles and placed its military on full combat alert after South Korea, Japan and the United States threatened to intercept a Unha-2 space launch vehicle. The next few years witnessed a string of hostilities, including the alleged North Korean involvement in the sinking of the South Korean warship "Cheonan", mutual ending of diplomatic ties, a North Korean artillery attack on Yeonpyeong Island, and growing international concern over North Korea's nuclear program. In May 2017, Moon Jae-in was elected President of South Korea with a promise to return to the Sunshine Policy. In February 2018, a détente developed at the Winter Olympics held in South Korea. In April, South Korean President Moon Jae-in and Kim Jong-un met at the DMZ and, in the Panmunjom Declaration, pledged to work for peace and nuclear disarmament.
In September, at a joint news conference in Pyongyang, Moon and Kim agreed upon turning the Korean Peninsula into a "land of peace without nuclear weapons and nuclear threats". North Korea is widely accused of having perhaps the worst human rights record in the world. A 2014 UN inquiry into human rights in North Korea concluded that, "The gravity, scale and nature of these violations reveal a state that does not have any parallel in the contemporary world". North Koreans have been referred to as "some of the world's most brutalized people" by Human Rights Watch, because of the severe restrictions placed on their political and economic freedoms. The North Korean population is strictly managed by the state and all aspects of daily life are subordinated to party and state planning. Employment is managed by the party on the basis of political reliability, and travel is tightly controlled by the Ministry of People's Security. Amnesty International reports of severe restrictions on the freedom of association, expression and movement, arbitrary detention, torture and other ill-treatment resulting in death, and executions. The State Security Department extrajudicially apprehends and imprisons those accused of political crimes without due process. People perceived as hostile to the government, such as Christians or critics of the leadership, are deported to labor camps without trial, often with their whole family and mostly without any chance of being released. Based on satellite images and defector testimonies, Amnesty International estimates that around 200,000 prisoners are held in six large political prison camps, where they are forced to work in conditions approaching slavery. Supporters of the government who deviate from the government line are subject to reeducation in sections of labor camps set aside for that purpose. Those who are deemed politically rehabilitated may reassume responsible government positions on their release. North Korean defectors have provided detailed testimonies on the existence of the total control zones where abuses such as torture, starvation, rape, murder, medical experimentation, forced labor, and forced abortions have been reported. On the basis of these abuses, as well as persecution on political, religious, racial and gender grounds, forcible transfer of populations, enforced disappearance of persons and forced starvation, the United Nations Commission of Inquiry has accused North Korea of crimes against humanity. The International Coalition to Stop Crimes Against Humanity in North Korea (ICNK) estimates that over 10,000 people die in North Korean prison camps every year. According to Human Rights Watch, which cites interviews with defectors, North Korean women are routinely subjected to sexual violence, unwanted sexual contact, and rape. Men in positions of power, including police, high-ranking officials, market supervisors, and guards can abuse women at will and are not prosecuted for it. It happens so often that it is accepted as a routine part of life. Women assume they can't do anything about it. The only ones with protection are those whose husbands or fathers are themselves in positions of power. The North Korean government rejects the human rights abuse claims, calling them "a smear campaign" and a "human rights racket" aimed at government change. In a 2014 report to the UN, North Korea dismissed accusations of atrocities as "wild rumors". 
The official state media, KCNA, responded with an article that included homophobic insults against the author of the human rights report, Michael Kirby, calling him "a disgusting old lecher with a 40-odd-year-long career of homosexuality ... This practice can never be found in the DPRK boasting of the sound mentality and good morals ... In fact, it is ridiculous for such gay to sponsor dealing with others' human rights issue." The government, however, admitted some human rights issues related to living conditions and stated that it is working to improve them. According to Amnesty International, citizens in North Korea are denied freedom of movement, including the right to leave the country at will, and its government denies access to international human rights observers. North Korea has a civil law system based on the Prussian model and influenced by Japanese traditions and communist legal theory. Judiciary procedures are handled by the Supreme Court (the highest court of appeal), provincial or special city-level courts, people's courts and special courts. People's courts are at the lowest level of the system and operate in cities, counties and urban districts, while different kinds of special courts handle cases related to military, railroad or maritime matters. Judges are theoretically elected by their respective local people's assemblies, but in practice they are appointed by the Workers' Party of Korea. The penal code is based on the principle of "nullum crimen sine lege" (no crime without a law), but remains a tool for political control despite several amendments reducing ideological influence. Courts carry out legal procedures related not only to criminal and civil matters but also to political cases. Political prisoners are sent to labor camps, while criminal offenders are incarcerated in a separate system. The Ministry of People's Security (MPS) maintains most law enforcement activities. It is one of the most powerful state institutions in North Korea and oversees the national police force, investigates criminal cases and manages non-political correctional facilities. It handles other aspects of domestic security like civil registration, traffic control, fire departments and railroad security. The State Security Department was separated from the MPS in 1973 to conduct domestic and foreign intelligence and counterintelligence and to manage the political prison system. Political camps can be short-term reeducation zones or "kwalliso" (total control zones) for lifetime detention. Camp 15 in Yodok and Camp 18 in Bukchang have been described in detailed testimonies. The security apparatus is very extensive, exerting strict control over residence, travel, employment, clothing, food and family life. Security forces employ mass surveillance. It is believed they tightly monitor cellular and digital communications. The Korean People's Army (KPA) has 1,106,000 active and 8,389,000 reserve and paramilitary troops, making it the largest military institution in the world. With an active-duty army of 1.21 million, the KPA is the fourth-largest military force in the world, after those of China, the United States and India. About 20 percent of men aged 17–54 serve in the regular armed forces, and approximately one in every 25 citizens is an enlisted soldier. The KPA has five branches: Ground Force, Navy, Air Force, Special Operations Force, and Rocket Force.
Command of the Korean People's Army lies in both the Central Military Commission of the Workers' Party of Korea and the independent State Affairs Commission. The Ministry of People's Armed Forces is subordinated to the latter. Of all KPA branches, the Ground Force is the largest. It has approximately one million personnel divided into 80 infantry divisions, 30 artillery brigades, 25 special warfare brigades, 20 mechanized brigades, 10 tank brigades and seven tank regiments. They are equipped with 3,700 tanks, 2,100 armored personnel carriers and infantry fighting vehicles, 17,900 artillery pieces, 11,000 anti-aircraft guns and some 10,000 MANPADS and anti-tank guided missiles. Other equipment includes 1,600 aircraft in the Air Force and 1,000 vessels in the Navy. North Korea has the largest special forces and the largest submarine fleet in the world. North Korea possesses nuclear weapons, but the strength of its arsenal is uncertain. In January 2018, estimates of North Korea's nuclear arsenal ranged between 15 and 60 bombs, probably including hydrogen bombs. Delivery capabilities are provided by the Rocket Force, which fields some 1,000 ballistic missiles of varying ranges. According to a 2004 South Korean assessment, North Korea possesses a stockpile of chemical weapons estimated at 2,500–5,000 tons, including nerve, blister, blood, and vomiting agents, as well as the ability to cultivate and produce biological weapons including anthrax, smallpox, and cholera. Because of its nuclear and missile tests, North Korea has been sanctioned under United Nations Security Council resolutions 1695 of July 2006, 1718 of October 2006, 1874 of June 2009, 2087 of January 2013, and 2397 of December 2017. The military faces some issues limiting its conventional capabilities, including obsolete equipment, insufficient fuel supplies and a shortage of digital command and control assets, since UN sanctions ban other countries from selling it weapons. To compensate for these deficiencies, the KPA has deployed a wide range of asymmetric warfare technologies like anti-personnel blinding lasers, GPS jammers, midget submarines and human torpedoes, stealth paint, and cyberwarfare units. In 2015, North Korea was estimated to have 6,000 sophisticated computer security personnel. KPA units have allegedly attempted to jam South Korean military satellites. Much of the equipment is engineered and produced by a domestic defense industry. Weapons are manufactured in roughly 1,800 underground defense industry plants scattered throughout the country, most of them located in Chagang Province. The defense industry is capable of producing a full range of individual and crew-served weapons, artillery, armored vehicles, tanks, missiles, helicopters, surface combatants, submarines, landing and infiltration craft, and Yak-18 trainers, and possibly co-producing jet aircraft. According to official North Korean media, military expenditures for 2010 amounted to 15.8 percent of the state budget. The U.S. State Department has estimated that North Korea's military spending averaged 23% of its GDP from 2004 to 2014, the highest level in the world. With the exception of a small Chinese community and a few ethnic Japanese, North Korea's people are ethnically homogeneous. Demographic experts in the 20th century estimated that the population would grow to 25.5 million by 2000 and 28 million by 2010, but this increase never occurred due to the North Korean famine.
It began in 1995, lasted for three years and resulted in the deaths of between 240,000 and 420,000 North Koreans. International donors led by the United States initiated shipments of food through the World Food Program in 1997 to combat the famine. Despite a drastic reduction of aid under the George W. Bush administration, the situation gradually improved: the number of malnourished children declined from 60% in 1998 to 37% in 2006 and 28% in 2013. Domestic food production almost recovered to the recommended annual level of 5.37 million tons of cereal equivalent in 2013, but the World Food Program reported a continuing lack of dietary diversity and access to fats and proteins. The famine had a significant impact on the population growth rate, which declined to 0.9% annually in 2002. It was 0.5% in 2014. Late marriages after military service, limited housing space and long hours of work or political studies further exhaust the population and reduce growth. The national birth rate is 14.5 births per year per 1,000 population. Two-thirds of households consist of extended families mostly living in two-room units. Marriage is virtually universal and divorce is extremely rare. North Korea had a life expectancy of 69.8 years in 2013. While North Korea is classified as a low-income country, the structure of North Korea's causes of death (2013) is unlike that of other low-income countries. Instead, it is closer to worldwide averages, with non-communicable diseases—such as cardiovascular disease and cancers—accounting for two-thirds of the total deaths. A 2013 study reported that communicable diseases and malnutrition are responsible for 29% of the total deaths in North Korea. This figure is higher than those of high-income countries and South Korea, but half of the average 57% of all deaths in other low-income countries. In 2003, infectious diseases like tuberculosis, malaria, and hepatitis B were described as endemic to the country as a result of the famine. However, in 2013, they were reported to be in decline. In 2013, cardiovascular disease as a single disease group was reported as the largest cause of death in North Korea. The three major causes of death in North Korea are ischaemic heart disease (13%), lower respiratory infections (11%) and cerebrovascular disease (7%). Non-communicable diseases risk factors in North Korea include high rates of urbanization, an aging society, and high rates of smoking and alcohol consumption amongst men. According to a 2003 report by the United States Department of State, almost 100% of the population has access to water and sanitation. 80% of the population had access to improved sanitation facilities in 2015. A free universal insurance system is in place. Quality of medical care varies significantly by region and is often low, with severe shortages of equipment, drugs and anesthetics. According to WHO, expenditure on health per capita is one of the lowest in the world. Preventive medicine is emphasized through physical exercise and sports, nationwide monthly checkups and routine spraying of public places against disease. Every individual has a lifetime health card which contains a full medical record. The 2008 census listed the entire population as literate. An 11-year free, compulsory cycle of primary and secondary education is provided in more than 27,000 nursery schools, 14,000 kindergartens, 4,800 four-year primary and 4,700 six-year secondary schools. 77% of males and 79% of females aged 30–34 have finished secondary school. 
An additional 300 universities and colleges offer higher education. Most graduates from the compulsory program do not attend university but begin their obligatory military service or proceed to work in farms or factories instead. The main deficiencies of higher education are the heavy presence of ideological subjects, which comprise 50% of courses in social studies and 20% in sciences, and the imbalances in curriculum. The study of natural sciences is greatly emphasized while social sciences are neglected. Heuristics is actively applied to develop the independence and creativity of students throughout the system. The study of Russian and English was made compulsory in upper middle schools in 1978. North Korea shares the Korean language with South Korea, although some dialectal differences exist within both Koreas. North Koreans refer to their Pyongyang dialect as "munhwaŏ" ("cultured language") as opposed to the dialects of South Korea, especially the Seoul dialect or "p'yojun'ŏ" ("standard language"), which are viewed as decadent because of their use of loanwords from Chinese and European languages (particularly English). Words of Chinese, Manchu or Western origin have been eliminated from "munhwaŏ", along with the usage of Chinese hancha characters. Written language uses only the chosŏn'gŭl (Hangul) phonetic alphabet, developed under Sejong the Great (1418–1450). Officially, North Korea is an atheist state. There are no known official statistics of religions in North Korea. According to Religious Intelligence, 64% of the population are irreligious, 16% practice Korean shamanism, 14% practice Chondoism, 4% are Buddhist, and 2% are Christian. Freedom of religion and the right to religious ceremonies are constitutionally guaranteed, but religions are restricted by the government. Amnesty International has expressed concerns about religious persecution in North Korea. The influence of Buddhism and Confucianism still has an effect on cultural life. Chondoism ("Heavenly Way") is an indigenous syncretic belief combining elements of Korean shamanism, Buddhism, Taoism and Catholicism that is officially represented by the WPK-controlled Chondoist Chongu Party. The Open Doors mission, a Protestant group based in the United States and founded during the Cold War era, claims the most severe persecution of Christians in the world occurs in North Korea. Four state-sanctioned churches exist, but critics claim these are showcases for foreigners. According to North Korean documents and refugee testimonies, all North Koreans are sorted into groups according to their Songbun, an ascribed status system based on a citizen's assessed loyalty to the government. Based on their own behavior and the political, social, and economic background of their family for three generations, as well as behavior by relatives within that range, Songbun is allegedly used to determine whether an individual is trusted with responsibility, given opportunities, or even receives adequate food. Songbun allegedly affects access to educational and employment opportunities and particularly whether a person is eligible to join North Korea's ruling party. There are three main classifications and about 50 sub-classifications. According to Kim Il-sung, speaking in 1958, the loyal "core class" constituted 25% of the North Korean population, the "wavering class" 55%, and the "hostile class" 20%.
The highest status is accorded to individuals descended from those who participated with Kim Il-sung in the resistance against Japanese occupation during and before World War II and to those who were factory workers, laborers, or peasants in 1950. While some analysts believe private commerce recently changed the Songbun system to some extent, most North Korean refugees say it remains a commanding presence in everyday life. The North Korean government claims all citizens are equal and denies any discrimination on the basis of family background. North Korea has maintained one of the most closed and centralized economies in the world since the 1940s. For several decades, it followed the Soviet pattern of five-year plans with the ultimate goal of achieving self-sufficiency. Extensive Soviet and Chinese support allowed North Korea to rapidly recover from the Korean War and register very high growth rates. Systematic inefficiency began to arise around 1960, when the economy shifted from the extensive to the intensive development stage. The shortage of skilled labor, energy, arable land and transportation significantly impeded long-term growth and resulted in consistent failure to meet planning objectives. The major slowdown of the economy contrasted with South Korea, which surpassed the North in terms of absolute GDP and per capita income by the 1980s. North Korea declared the last seven-year plan unsuccessful in December 1993 and thereafter stopped announcing plans. The loss of Eastern Bloc trading partners and a series of natural disasters throughout the 1990s caused severe hardships, including widespread famine. By 2000, the situation improved owing to a massive international food assistance effort, but the economy continues to suffer from food shortages, dilapidated infrastructure and a critically low energy supply. In an attempt to recover from the collapse, the government began structural reforms in 1998 that formally legalized private ownership of assets and decentralized control over production. A second round of reforms in 2002 led to an expansion of market activities, partial monetization, flexible prices and salaries, and the introduction of incentives and accountability techniques. Despite these changes, North Korea remains a command economy where the state owns almost all means of production and development priorities are defined by the government. North Korea has the structural profile of a relatively industrialized country where nearly half of the Gross Domestic Product is generated by industry and human development is at medium levels. Purchasing power parity (PPP) GDP is estimated at $40 billion, with a very low per capita value of $1,800. In 2012, Gross national income per capita was $1,523, compared to $28,430 in South Korea. The North Korean won is the national currency, issued by the Central Bank of the Democratic People's Republic of Korea. The economy is heavily nationalized. Food and housing are extensively subsidized by the state; education and healthcare are free; and the payment of taxes was officially abolished in 1974. A variety of goods are available in department stores and supermarkets in Pyongyang, though most of the population relies on small-scale "jangmadang" markets. 
In 2009, the government attempted to stem the expanding free market by banning jangmadang and the use of foreign currency, heavily devaluing the won and restricting the convertibility of savings in the old currency, but the resulting inflation spike and rare public protests caused a reversal of these policies. Private trade is dominated by women because most men are required to be present at their workplace, even though many state-owned enterprises are non-operational. Industry and services employ 65% of North Korea's 12.6 million labor force. Major industries include machine building, military equipment, chemicals, mining, metallurgy, textiles, food processing and tourism. Iron ore and coal production are among the few sectors where North Korea performs significantly better than its southern neighbor—it produces about 10 times as much of each resource. Using ex-Romanian drilling rigs, several oil exploration companies have confirmed significant oil reserves in the North Korean shelf of the Sea of Japan and in areas south of Pyongyang. The agricultural sector was shattered by the natural disasters of the 1990s. Its 3,500 cooperatives and state farms were among the most productive and successful in the world around 1980 but now experience chronic fertilizer and equipment shortages. Rice, corn, soybeans and potatoes are some of the primary crops. A significant contribution to the food supply comes from commercial fishing and aquaculture. Tourism has been a growing sector for the past decade. North Korea has been aiming to increase the number of foreign visitors through projects like the Masikryong Ski Resort. Foreign trade surpassed pre-crisis levels in 2005 and continues to expand. North Korea has a number of special economic zones (SEZs) and Special Administrative Regions where foreign companies can operate with tax and tariff incentives while North Korean establishments gain access to improved technology. Initially four such zones existed, but they yielded little overall success. The SEZ system was overhauled in 2013 when 14 new zones were opened and the Rason Special Economic Zone was reformed as a joint Chinese-North Korean project. The Kaesong Industrial Region is a special economic zone where more than 100 South Korean companies employ some 52,000 North Korean workers. China is the biggest trading partner of North Korea outside inter-Korean trade, accounting for more than 84% of total external trade ($5.3 billion), followed by India with a 3.3% share ($205 million). In 2014, Russia wrote off 90% of North Korea's debt and the two countries agreed to conduct all transactions in rubles. Overall, external trade in 2013 reached a total of $7.3 billion (the highest amount since 1990), while inter-Korean trade dropped to an eight-year low of $1.1 billion. North Korea's energy infrastructure is obsolete and in disrepair. Power shortages are chronic and would not be alleviated even by electricity imports because the poorly maintained grid causes significant losses during transmission. Coal accounts for 70% of primary energy production, followed by hydroelectric power with 17%. The government under Kim Jong-un has increased emphasis on renewable energy projects like wind farms, solar parks, solar heating and biomass. A set of legal regulations adopted in 2014 stressed the development of geothermal, wind and solar energy along with recycling and environmental conservation.
North Korea's long-term objective is to curb fossil fuel usage and reach an output of 5 million kilowatts from renewable sources by 2044, up from its current total of 430,000 kilowatts from all sources. Wind power is projected to satisfy 15% of the country's total energy demand under this strategy. North Korea also strives to develop its own civilian nuclear program. These efforts are under much international dispute due to their military applications and concerns about safety. Transport infrastructure includes railways, highways, water and air routes, but rail transport is by far the most widespread. North Korea has some 5,200 kilometers of railways mostly in standard gauge which carry 80% of annual passenger traffic and 86% of freight, but electricity shortages undermine their efficiency. Construction of a high-speed railway connecting Kaesong, Pyongyang and Sinuiju with speeds exceeding 200 km/h was approved in 2013. North Korea connects with the Trans-Siberian Railway through Rajin. Road transport is very limited—only 724 kilometers of the 25,554 kilometer road network are paved, and maintenance on most roads is poor. Only 2% of the freight capacity is supported by river and sea transport, and air traffic is negligible. All port facilities are ice-free and host a merchant fleet of 158 vessels. Eighty-two airports and 23 helipads are operational and the largest serve the state-run airline, Air Koryo. Cars are relatively rare, but bicycles are common. R&D efforts are concentrated at the State Academy of Sciences, which runs 40 research institutes, 200 smaller research centers, a scientific equipment factory and six publishing houses. The government considers science and technology to be directly linked to economic development. A five-year scientific plan emphasizing IT, biotechnology, nanotechnology, marine and plasma research was carried out in the early 2000s. A 2010 report by the South Korean Science and Technology Policy Institute identified polymer chemistry, single carbon materials, nanoscience, mathematics, software, nuclear technology and rocketry as potential areas of inter-Korean scientific cooperation. North Korean institutes are strong in these fields of research, although their engineers require additional training and laboratories need equipment upgrades. Under its "constructing a powerful knowledge economy" slogan, the state has launched a project to concentrate education, scientific research and production into a number of "high-tech development zones". International sanctions remain a significant obstacle to their development. The "Miraewon" network of electronic libraries was established in 2014 under similar slogans. Significant resources have been allocated to the national space program, which is managed by the National Aerospace Development Administration (formerly managed by the Korean Committee of Space Technology until April 2013) Domestically produced launch vehicles and the Kwangmyŏngsŏng satellite class are launched from two spaceports, the Tonghae Satellite Launching Ground and the Sohae Satellite Launching Station. After four failed attempts, North Korea became the tenth spacefaring nation with the launch of Kwangmyŏngsŏng-3 Unit 2 in December 2012, which successfully reached orbit but was believed to be crippled and non-operational. It joined the Outer Space Treaty in 2009 and has stated its intentions to undertake manned and Moon missions. 
The government insists the space program is for peaceful purposes, but the United States, Japan, South Korea and other countries maintain that it serves to advance military ballistic missile programs. On 7 February 2016, North Korea successfully launched a long-range rocket, supposedly to place a satellite into orbit. Critics believe that the real purpose of the launch was to test a ballistic missile. The launch was strongly condemned by the UN Security Council. A statement broadcast on Korean Central Television said that a new Earth observation satellite, Kwangmyongsong-4, had successfully been put into orbit less than 10 minutes after lift-off from the Sohae space center in North Phyongan province.
Usage of communication technology is controlled by the Ministry of Post and Telecommunications. An adequate nationwide fiber-optic telephone system with 1.18 million fixed lines and expanding mobile coverage is in place. Most phones are installed for senior government officials, and installation requires a written explanation of why the user needs a telephone and how it will be paid for. Cellular coverage is available with a 3G network operated by Koryolink, a joint venture with Orascom Telecom Holding. The number of subscribers has increased from 3,000 in 2002 to almost two million in 2013. International calls through either fixed or cellular service are restricted, and mobile Internet is not available. Internet access itself is limited to a handful of elite users and scientists. Instead, North Korea has a walled garden intranet system called Kwangmyong, which is maintained and monitored by the Korea Computer Center. Its content is limited to state media, chat services, message boards, an e-mail service and an estimated 1,000–5,500 websites. Computers employ the Red Star OS, an operating system derived from Linux, with a user shell visually similar to that of OS X. On 19 September 2016, the TLDR Project noticed that the DNS data for North Korea's Internet top-level domain had been left open, allowing global DNS zone transfers; a dump of the discovered data was shared on GitHub.
Despite a historically strong Chinese influence, Korean culture has shaped its own unique identity. It came under attack during the Japanese rule from 1910 to 1945, when Japan enforced a cultural assimilation policy. Koreans were forced to learn and speak Japanese, adopt the Japanese family name system and Shinto religion, and were forbidden to write or speak the Korean language in schools, businesses, or public places. After the peninsula was divided in 1945, two distinct cultures formed out of the common Korean heritage. North Koreans have little exposure to foreign influence. The revolutionary struggle and the brilliance of the leadership are some of the main themes in art. "Reactionary" elements from traditional culture have been discarded and cultural forms with a "folk" spirit have been reintroduced. Korean heritage is protected and maintained by the state. Over 190 historical sites and objects of national significance are cataloged as National Treasures of North Korea, while some 1,800 less valuable artifacts are included in a list of Cultural Assets. The Historic Sites and Monuments in Kaesong and the Complex of Goguryeo Tombs are UNESCO World Heritage Sites. Visual arts are generally produced in the aesthetic of socialist realism. North Korean painting combines the influence of Soviet and Japanese visual expression to instill a sentimental loyalty to the system.
All artists in North Korea are required to join the Artists' Union, and the best among them can receive an official license to portray the leaders. Portraits and sculptures depicting Kim Il-sung, Kim Jong-il and Kim Jong-un are classed as "Number One works". Most aspects of art have been dominated by Mansudae Art Studio since its establishment in 1959. It employs around 1,000 artists in what is likely the biggest art factory in the world where paintings, murals, posters and monuments are designed and produced. The studio has commercialized its activity and sells its works to collectors in a variety of countries including China, where it is in high demand. Mansudae Overseas Projects is a subdivision of Mansudae Art Studio that carries out construction of large-scale monuments for international customers. Some of the projects include the African Renaissance Monument in Senegal, and the Heroes' Acre in Namibia. In the Democratic People's Republic of Korea, the Goguryeo tumulus is registered on the World Heritage list of UNESCO. These remains were registered as the first World Heritage property of North Korea in the UNESCO World Heritage Committee (WHC) in July 2004. There are 63 burial mounds in the tomb group, with clear murals preserved. The burial customs of the Goguryeo culture have influenced Asian civilizations beyond Korea, including Japan. The government emphasized optimistic folk-based tunes and revolutionary music throughout most of the 20th century. Ideological messages are conveyed through massive orchestral pieces like the "Five Great Revolutionary Operas" based on traditional Korean "ch'angguk". Revolutionary operas differ from their Western counterparts by adding traditional instruments to the orchestra and avoiding recitative segments. "Sea of Blood" is the most widely performed of the Five Great Operas: since its premiere in 1971, it has been played over 1,500 times, and its 2010 tour in China was a major success. Western classical music by Brahms, Tchaikovsky, Stravinsky and other composers is performed both by the State Symphony Orchestra and student orchestras. Pop music appeared in the 1980s with the Pochonbo Electronic Ensemble and Wangjaesan Light Music Band. Improved relations with South Korea following the 2000 inter-Korean summit caused a decline in direct ideological messages in pop songs, but themes like comradeship, nostalgia and the construction of a powerful country remained. In 2014, the all-girl Moranbong Band was described as the most popular group in the country. North Koreans also listen to K-pop which spreads through illegal markets. All publishing houses are owned by the government or the WPK because they are considered an important tool for propaganda and agitation. The Workers' Party of Korea Publishing House is the most authoritative among them and publishes all works of Kim Il-sung, ideological education materials and party policy documents. The availability of foreign literature is limited, examples being North Korean editions of Indian, German, Chinese and Russian fairy tales, "Tales from Shakespeare", some works of Bertolt Brecht and Erich Kästner, and the Harry Potter series. Kim Il-sung's personal works are considered "classical masterpieces" while the ones created under his instruction are labeled "models of "Juche" literature". These include "The Fate of a Self-Defense Corps Man", "The Song of Korea" and "Immortal History", a series of historical novels depicting the suffering of Koreans under Japanese occupation. 
More than four million literary works were published between the 1980s and the early 2000s, but almost all of them belong to a narrow variety of political genres like "army-first revolutionary literature". Science fiction is considered a secondary genre because it somewhat departs from the traditional standards of detailed descriptions and metaphors of the leader. The exotic settings of the stories give authors more freedom to depict cyberwarfare, violence, sexual abuse and crime, which are absent in other genres. Sci-fi works glorify technology and promote the Juche concept of anthropocentric existence through depictions of robotics, space exploration and immortality.
Government policies towards film are no different than those applied to other arts—motion pictures serve to fulfill the targets of "social education". Some of the most influential films are based on historic events ("An Jung-geun shoots Itō Hirobumi") or folk tales ("Hong Gildong"). Most movies have predictable propaganda story lines, which makes cinema an unpopular form of entertainment; viewers only see films that feature their favorite actors. Western productions are only available at private showings to high-ranking Party members, although the 1997 film "Titanic" is frequently shown to university students as an example of Western culture. Access to foreign media products is available through smuggled DVDs and television or radio broadcasts in border areas. Western films such as "The Interview", "Titanic", and "Charlie's Angels" have been smuggled across North Korea's borders, giving North Korean citizens access to them.
North Korean media are under some of the strictest government control in the world. Censorship encompasses all information produced by the media, which is monitored heavily by government officials and used strictly to reinforce ideals approved by the government; there is no freedom of the press, as all media are controlled and filtered through governmental censors. In 2017, North Korea ranked 180th out of 180 countries in Reporters Without Borders' annual Press Freedom Index. According to Freedom House, all media outlets serve as government mouthpieces, all journalists are party members and listening to foreign broadcasts carries the threat of a death penalty. The main news provider is the Korean Central News Agency. All 12 major newspapers and 20 periodicals, including "Rodong Sinmun", are published in the capital. There are three state-owned TV stations. Two of them broadcast only on weekends and the Korean Central Television is on air every day in the evenings. Uriminzokkiri and its associated YouTube and Twitter accounts distribute imagery, news and video issued by government media. The Associated Press opened the first Western all-format, full-time bureau in Pyongyang in 2012.
Media coverage of North Korea has often been inadequate as a result of the country's isolation. Stories such as Kim Jong-un undergoing surgery to look like his grandfather, executing his ex-girlfriend, or feeding his uncle to a pack of hungry dogs have been circulated by foreign media as truth despite lacking a credible source. Many of the claims originate from the South Korean right-wing newspaper "The Chosun Ilbo". Max Fisher of "The Washington Post" has written that "almost any story [on North Korea] is treated as broadly credible, no matter how outlandish or thinly sourced".
Occasional deliberate disinformation on the part of North Korean establishments further complicates the issue.
Korean cuisine has evolved through centuries of social and political change. Originating from ancient agricultural and nomadic traditions in southern Manchuria and the Korean Peninsula, it has gone through a complex interaction of the natural environment and different cultural trends. Rice dishes and kimchi are staple Korean foods. In a traditional meal, they accompany both side dishes ("panch'an") and main courses like "juk", "pulgogi" or noodles. "Soju" liquor is the best-known traditional Korean spirit. North Korea's most famous restaurant, Okryu-gwan, located in Pyongyang, is known for its "raengmyeon" cold noodles. Other dishes served there include gray mullet soup with boiled rice, beef rib soup, green bean pancake, "sinsollo" and dishes made from terrapin. Okryu-gwan sends research teams into the countryside to collect data on Korean cuisine and introduce new recipes. Some Asian cities host branches of the Pyongyang restaurant chain where waitresses perform music and dance.
Most schools have daily practice in association football, basketball, table tennis, gymnastics, boxing and others. The DPR Korea League is popular inside the country and its games are often televised. The national football team, "Chollima", competed in the FIFA World Cup in 2010, when it lost all three matches against Brazil, Portugal and Ivory Coast. Its 1966 appearance was much more successful, seeing a surprise 1–0 victory over Italy and a 5–3 quarter-final loss to Portugal. A national team represents the nation in international basketball competitions as well. In December 2013, former American basketball professional Dennis Rodman visited North Korea to help train the national team after he developed a friendship with Kim Jong-un.
North Korea's first appearance in the Olympics came in 1964. The 1972 Olympics saw its Summer Games debut and five medals, including one gold. With the exception of the boycotted Los Angeles and Seoul Olympics, North Korean athletes have won medals in all Summer Games since then. Weightlifter Kim Un-guk broke the world record in the men's 62 kg category at the 2012 Summer Olympics in London. Successful Olympians receive luxury apartments from the state in recognition of their achievements. The Arirang Festival has been recognized by the Guinness World Records as the biggest choreographic event in the world. Some 100,000 athletes perform rhythmic gymnastics and dances while another 40,000 participants create a vast animated screen in the background. The event is an artistic representation of the country's history and pays homage to Kim Il-sung and Kim Jong-il. Rungrado 1st of May Stadium, the largest stadium in the world with its capacity of 150,000, hosts the Festival. The Pyongyang Marathon is another notable sports event. It is an IAAF Bronze Label Race in which amateur runners from around the world can participate. Between 2010 and 2019, North Korea imported 138 purebred horses from Russia at a cost of over $584,000.
History of North Korea
The history of North Korea began at the end of World War II in 1945. The surrender of Japan led to the division of Korea at the 38th parallel, with the Soviet Union occupying the north, and the United States occupying the south. The Soviet Union and the United States failed to agree on a way to unify the country, and in 1948 they established two separate governments – the Soviet-aligned Democratic People's Republic of Korea and the Western-aligned Republic of Korea – each claiming to be the legitimate government of all of Korea. In 1950 the Korean War broke out. After much destruction, the war ended with a stalemate. The division at the 38th parallel was replaced by the Korean Demilitarized Zone. Tension between the two sides continued. Out of the rubble North Korea built an industrialized command economy. Kim Il-sung led North Korea until his death in 1994. He developed a pervasive personality cult and steered the country on an independent course in accordance with the principle of "Juche" (self-reliance). However, with natural disasters and the collapse of the Soviet Bloc in 1991, North Korea went into a severe economic crisis. Kim Il-sung's son, Kim Jong-il, succeeded him, and was in turn succeeded by his son, Kim Jong-un. Amid international alarm, North Korea developed nuclear missiles. In 2018, Kim Jong-un made a sudden peace overture towards South Korea and the United States.
From 1910 to the end of World War II in 1945, Korea was under Japanese rule. Most Koreans were peasants engaged in subsistence farming. In the 1930s, Japan developed mines, hydro-electric dams, steel mills, and manufacturing plants in northern Korea and neighboring Manchuria. The Korean industrial working class expanded rapidly, and many Koreans went to work in Manchuria. As a result, 65% of Korea's heavy industry was located in the north, but, due to the rugged terrain, only 37% of its agriculture. A Korean guerrilla movement emerged in the mountainous interior and in Manchuria, harassing the Japanese imperial authorities. One of the most prominent guerrilla leaders was the Communist Kim Il-sung. Northern Korea had little exposure to modern, Western ideas. One partial exception was the penetration of religion. Since the arrival of missionaries in the late nineteenth century, the northwest of Korea, and Pyongyang in particular, had been a stronghold of Christianity. As a result, Pyongyang was called the "Jerusalem of the East".
At the Tehran Conference in November 1943 and the Yalta Conference in February 1945, the Soviet Union promised to join its allies in the Pacific War within three months of victory in Europe. On August 8, 1945, after three months to the day, the Soviet Union declared war on Japan. Soviet troops advanced rapidly, and the US government became anxious that they would occupy the whole of Korea. On August 10, the US government decided to propose the 38th parallel as the dividing line between a Soviet occupation zone in the north and a US occupation zone in the south. The parallel was chosen as it would place the capital Seoul under American control. To the surprise of the Americans, the Soviet Union immediately accepted the division. The agreement was incorporated into General Order No. 1 (approved on 17 August 1945) for the surrender of Japan. The division placed sixteen million Koreans in the American zone and nine million in the Soviet zone. 
Soviet forces began amphibious landings in Korea by August 14 and rapidly took over the northeast, and on August 16 they landed at Wonsan. On August 24, the Red Army reached Pyongyang. US forces did not arrive in the south until September 8. During August, People's Committees sprang up across Korea, affiliated with the Committee for the Preparation of Korean Independence, which in September founded the People's Republic of Korea. When Soviet troops entered Pyongyang, they found a local People's Committee established there, led by veteran Christian nationalist Cho Man-sik. Unlike their American counterparts, the Soviet authorities recognized and worked with the People's Committees. By some accounts, Cho Man-sik was the Soviet government's first choice to lead North Korea. On September 19, Kim Il-sung and 66 other Korean Red Army officers arrived in Wonsan. They had fought the Japanese in Manchuria in the 1930s but had lived in the USSR and trained in the Red Army since 1941. On October 14, Soviet authorities introduced Kim to the North Korean public as a guerrilla hero. In December 1945, at the Moscow Conference, the Soviet Union agreed to a US proposal for a trusteeship over Korea for up to five years in the lead-up to independence. Most Koreans demanded independence immediately, but Kim and the other Communists supported the trusteeship under pressure from the Soviet government. Cho Man-sik opposed the proposal at a public meeting on January 4, 1946, and disappeared into house arrest. On February 8, 1946, the People's Committees were reorganized as Interim People's Committees dominated by Communists. The new regime instituted popular policies of land redistribution, industry nationalization, labor law reform, and equality for women. Meanwhile, existing Communist groups were reconstituted as a party under Kim Il-sung's leadership. On December 18, 1945, local Communist Party committees were combined into the North Korean Communist Party. In August 1946, this party merged with the New People's Party to form the Workers' Party of North Korea. In December, a popular front led by the Workers' Party dominated elections in the North. In 1949, the Workers' Party of North Korea merged with its southern counterpart to become the Workers' Party of Korea with Kim as party chairman. Kim established the Korean People's Army (KPA) aligned with the Communists, formed from a cadre of guerrillas and former soldiers who had gained combat experience in battles against the Japanese and later Nationalist Chinese troops. From their ranks, using Soviet advisers and equipment, Kim constructed a large army skilled in infiltration tactics and guerrilla warfare. Before the outbreak of the Korean War, Joseph Stalin equipped the KPA with modern medium tanks, trucks, artillery, and small arms. Kim also formed an air force, equipped at first with ex-Soviet propeller-driven fighter and attack aircraft. Later, North Korean pilot candidates were sent to the Soviet Union and China to train in MiG-15 jet aircraft at secret bases. In 1946, a sweeping series of laws transformed North Korea on Stalinist lines. The "land to the tiller" reform redistributed the bulk of agricultural land to the poor and landless peasant population, effectively breaking the power of the landed class. This was followed by a "Labor Law", a "Sexual Equality Law", and a "Nationalisation of Industry, Transport, Communications and Banks Law". 
As negotiations with the Soviet Union on the future of Korea failed to make progress, the US took the issue to the United Nations in September 1947. In response, the UN established the United Nations Temporary Commission on Korea to hold elections in Korea. The Soviet Union opposed this move. In the absence of Soviet cooperation, it was decided to hold UN-supervised elections in the south only. In April 1948, a conference of organizations from the North and the South met in Pyongyang, but the conference produced no results. The southern politicians Kim Koo and Kim Kyu-sik attended the conference and boycotted the elections in the South. Both men were posthumously awarded the National Reunification Prize by North Korea. The elections were held in South Korea on May 10, 1948. On August 15, the Republic of Korea formally came into existence. A parallel process occurred in North Korea. A new Supreme People's Assembly was elected in August 1948, and on September 3 a new constitution was promulgated. The Democratic People's Republic of Korea (DPRK) was proclaimed on September 9, with Kim as Premier. On December 12, 1948, the United Nations General Assembly accepted the report of UNTCOK and declared the Republic of Korea to be the "only lawful government in Korea". By 1949, North Korea was a full-fledged Communist state. All parties and mass organizations joined the Democratic Front for the Reunification of the Fatherland, ostensibly a popular front but in reality dominated by the Communists. The government moved rapidly to establish a political system that was partly styled on the Soviet system, with political power monopolised by the Workers' Party of Korea (WPK). The consolidation of Syngman Rhee's government in the South with American military support and the suppression of the October 1948 insurrection ended North Korean hopes that a revolution in the South could reunify Korea, and from early 1949 Kim Il-sung sought Soviet and Chinese support for a military campaign to reunify the country by force. The withdrawal of most U.S. forces from South Korea in June 1949 left the southern government defended only by a weak and inexperienced South Korean army. The southern régime also had to deal with a citizenry of uncertain loyalty. The North Korean army, by contrast, had benefited from the Soviet Union's WWII-era equipment, and had a core of hardened veterans who had fought either as anti-Japanese guerrillas or alongside the Chinese Communists. In 1949 and 1950 Kim traveled to Moscow with the South Korean Communist leader Pak Hon-yong to raise support for a war of reunification. Initially Joseph Stalin rejected Kim Il-sung's requests for permission to invade the South, but in late 1949 the Communist victory in China and the development of Soviet nuclear weapons made him re-consider Kim's proposal. In January 1950, after China's Mao Zedong indicated that the People's Republic of China would send troops and other support to Kim, Stalin approved an invasion. The Soviets provided limited support in the form of advisers who helped the North Koreans as they planned the operation, and Soviet military instructors to train some of the Korean units. However, from the very beginning Stalin made it clear that the Soviet Union would avoid a direct confrontation with the U.S. over Korea and would not commit ground forces even in case of major military crisis. The stage was set for a civil war between the two rival régimes on the Korean peninsula. 
For over a year before the outbreak of war, the two sides had engaged in a series of bloody clashes along the 38th parallel, especially in the Ongjin area on the west coast. On June 25, 1950, claiming to be responding to a South Korean assault on Ongjin, the Northern forces launched an amphibious offensive all along the parallel. Due to a combination of surprise and military superiority, the Northern forces quickly captured the capital Seoul, forcing Syngman Rhee and his government to flee. By mid-July North Korean troops had overwhelmed the South Korean and allied American units and forced them back to a defensive line in south-east South Korea known as the Pusan Perimeter. During its brief occupation of southern Korea, the DPRK regime initiated radical social change, which included the nationalisation of industry, land reform, and the restoration of the People's Committees. According to the captured US General William F. Dean, "the civilian attitude seemed to vary between enthusiasm and passive acceptance". The United Nations condemned North Korea's actions and approved an American-led intervention force to defend South Korea. In September, UN forces landed at Inchon and retook Seoul. Under the leadership of US General Douglas MacArthur, UN forces pushed north, reaching the Chinese border. According to Bruce Cumings, the North Korean forces were not routed, but managed a strategic retreat into the mountainous interior and into neighboring Manchuria. Kim Il-sung's government re-established itself in a stronghold in Chagang Province. In late November, Chinese forces entered the war and pushed the UN forces back, retaking Pyongyang in December 1950 and Seoul in January 1951. According to American historian Bruce Cumings, the Korean People's Army played an equal part in this counterattack. UN forces managed to retake Seoul for South Korea. The war essentially became a bloody stalemate for the next two years. American bombing included the use of napalm against populated areas and the destruction of dams and dykes, which caused devastating floods. China and North Korea also alleged the US was deploying biological weapons. As a result of the bombing, almost every substantial building and much of the infrastructure in North Korea was destroyed. The North Koreans responded by building homes, schools, hospitals, and factories underground. Economic output in 1953 had fallen by 75-90% compared with 1949. While the bombing continued, armistice negotiations, which had commenced in July 1951, wore on. North Korea's lead negotiator was General Nam Il. The Korean Armistice Agreement was signed on July 27, 1953. A ceasefire followed, but there was no peace treaty, and hostilities continued at a lower intensity. Kim began gradually consolidating his power. Up to this time, North Korean politics were represented by four factions: the Yan'an faction, made up of returnees from China; the "Soviet Koreans" who were ethnic Koreans from the USSR; native Korean communists led by Pak Hon-yong; and Kim's Kapsan group who had fought guerrilla actions against Japan in the 1930s. When the Workers' Party Central Committee plenum opened on 30 August 1953, Choe Chang-ik made a speech attacking Kim for concentrating the power of the party and the state in his own hands as well as criticising the party line on industrialisation which ignored widespread starvation among the North Korean people. However, Kim neutralised the attack on him by promising to moderate the regime, promises which were never kept. 
The majority in the Central Committee voted to support Kim and also voted in favor of expelling Choe and Pak Hon-yong from the Central Committee. Eleven of Kim's opponents were convicted in a show trial. It is believed that all were executed. A major purge of the KWP followed, with members originating from South Korea being expelled. Pak Hon-yong, party vice chairman and Foreign Minister of the DPRK, was blamed for the failure of the southern population to support North Korea during the war, was dismissed from his positions in 1953, and was executed after a show trial in 1955. The Party Congress in 1956 indicated the transformation that the party had undergone. Most members of other factions had lost their positions of influence. More than half the delegates had joined after 1950, most were under 40 years old, and most had limited formal education. In February 1956, Soviet leader Nikita Khrushchev made a sweeping denunciation of Stalin, which sent shock waves throughout the Communist world. Encouraged by this, members of the party leadership in North Korea began to criticize Kim's dictatorial leadership, personality cult, and Stalinist economic policies. They were defeated by Kim at the August Plenum of the party. By 1960, 70 per cent of the members of the 1956 Central Committee were no longer in politics. Kim Il-sung had initially been criticized by the Soviets during a previous 1955 visit to Moscow for practicing Stalinism and a cult of personality, which was already growing enormous. The Korean ambassador to the USSR, Li Sangjo, a member of the Yan'an faction, reported that it had become a criminal offense to so much as write on Kim's picture in a newspaper and that he had been elevated to the status of Marx, Lenin, Mao, and Stalin in the communist pantheon. He also charged Kim with rewriting history to appear as if his guerrilla faction had single-handedly liberated Korea from the Japanese, completely ignoring the assistance of the Chinese People's Volunteers. In addition, Li stated that in the process of agricultural collectivization, grain was being forcibly confiscated from the peasants, leading to "at least 300 suicides" and that Kim made nearly all major policy decisions and appointments himself. Li reported that over 30,000 people were in prison for completely unjust and arbitrary reasons as trivial as not printing Kim Il-sung's portrait on sufficient quality paper or using newspapers with his picture to wrap parcels. Grain confiscation and tax collection were also conducted forcibly with violence, beatings, and imprisonment. In late 1968, known military opponents of North Korea's "Juche" (or self-reliance) ideology such as Kim Chang-bong (minister of National Security), Huh Bong-hak (chief of the Division for Southern Intelligence) and Lee Young-ho (commander in chief of the DPRK Navy) were purged as anti-party/counter-revolutionary elements, despite their credentials as anti-Japanese guerrilla fighters in the past. Kim's personality cult was modeled on Stalinism and his regime originally acknowledged Stalin as the supreme leader. After Stalin's death in 1953, however, Kim was described as the "Great Leader" or "Suryong". As his personality cult grew, the doctrine of "Juche" began to displace Marxism–Leninism. At the same time the cult extended beyond Kim himself to include his family in a revolutionary blood line. In 1972, to celebrate Kim Il-sung's birthday, the Mansu Hill Grand Monument was unveiled, including a 22-meter bronze statue of him. 
Like Mao in China, Kim Il-sung refused to accept Nikita Khrushchev's denunciation of Stalin and continued to model his regime on Stalinist norms. At the same time, he increasingly stressed Korean independence, as embodied in the concept of "Juche". Kim told Alexei Kosygin in 1965 that he was not anyone's puppet and "We...implement the purest Marxism and condemn as false both the Chinese admixtures and the errors of the CPSU". Relations with China had worsened during the war. Mao Zedong criticized Kim for having started the whole "idiotic war" and for being an incompetent military commander who should have been removed from power. PLA commander Peng Dehuai was equally contemptuous of Kim's skills at waging war. By some analysis, Kim Il-sung remained in power partially because the Soviets turned their attention to the Hungarian Revolution of 1956 that fall. The Soviets and Chinese were unable to stop the inevitable purge of Kim's domestic opponents or his move towards a one-man Stalinist autocracy and relations with both countries deteriorated in the former's case because of the elimination of the pro-Soviet Koreans and the latter because of the regime's refusal to acknowledge Chinese assistance in either liberation from the Japanese or the war in 1950–53. Tensions between North and South escalated in the late 1960s with a series of low-level armed clashes known as the Korean DMZ Conflict. In 1966, Kim declared "liberation of the south" to be a "national duty". In 1968, North Korean commandos launched the Blue House Raid, an unsuccessful attempt to assassinate the South Korean President Park Chung-hee. Shortly after, the US spy ship Pueblo was captured by the North Korean navy. The crew were held captive throughout the year despite American protests that the vessel was in international waters, and they were finally released in December after a formal US apology was issued. In April 1969 a North Korean fighter jet shot down an EC-121 aircraft, killing all 31 crewmen on board. The Nixon administration found itself unable to react at all, since the US was heavily committed in the Vietnam War and had no troops to spare if the situation in Korea escalated. However, the "Pueblo" capture and EC-121 shootdown did not find approval in Moscow, as the Soviet Union did not want a second major war to erupt in Asia. China's response to the USS "Pueblo" crisis is less clear. After Khrushchev was replaced by Leonid Brezhnev as Soviet Leader in 1964, and with the incentive of Soviet aid, North Korea strengthened its ties with the USSR. Kim condemned China's Cultural Revolution as "unbelievable idiocy". In turn, China's Red Guards labelled him a "fat revisionist". In 1972, the first formal summit meeting between Pyongyang and Seoul was held, but the cautious talks did not lead to a lasting change in the relationship. With the fall of South Vietnam to the North Vietnamese on April 30, 1975, Kim Il-sung felt that the US had shown its weakness and that reunification of Korea under his regime was possible. Kim visited Beijing in May 1975 in the hope of gaining political and military support for this plan to invade South Korea again, but Mao Zedong refused. Despite public proclamations of support, Mao privately told Kim that China would be unable to assist North Korea because of the lingering after-effects of the Cultural Revolution throughout China, and because Mao had recently decided to restore diplomatic relations with the US. 
Meanwhile, North Korea emphasized its independent orientation by joining the Non-Aligned Movement in 1975. It promoted "Juche" as a model for developing countries to follow. It developed strong ties with the regimes of Bokassa in the Central African Republic, Macias Nguema in Equatorial Guinea, Idi Amin in Uganda, Pol Pot in Cambodia, Gaddafi in Libya, and Ceausescu in Romania.
Reconstruction of the country after the war proceeded with extensive Chinese and Soviet assistance. Koreans with experience in Japanese industries also played a significant part. Land was collectivized between 1953 and 1958. Resistance appears to have been minimal as landlords had been eliminated by the earlier reforms or during the war. Although developmental debates took place within the Workers' Party of Korea in the 1950s, North Korea, like all the postwar communist states, undertook massive state investment in heavy industry, state infrastructure and military strength, neglecting the production of consumer goods. The first Three Year Plan (1954–1956) introduced the concept of "Juche" or self-reliance. The first Five Year Plan (1957–1961) consolidated the collectivization of agriculture and initiated mass mobilization campaigns: the Chollima Movement, the Chongsan-ni system in agriculture and the Taean Work System in industry. The Chollima Movement was influenced by China's Great Leap Forward, but did not have its disastrous results. Industry was fully nationalized by 1959. Taxation on agricultural income was abolished in 1966. North Korea was placed on a semi-war footing, with equal emphasis being given to the civilian and military economies. This was expressed in the 1962 Party Plenum by the slogan, "Arms in one hand and a hammer and sickle in the other!" At a special party conference in 1966, members of the leadership who opposed the military build-up were removed.
On the ruins left by the war, North Korea had built an industrialized command economy. Che Guevara, then a Cuban government minister, visited North Korea in 1960, and proclaimed it a model for Cuba to follow. In 1965, the British economist Joan Robinson described North Korea's economic development as a "miracle". As late as the 1970s, its GDP per capita was estimated to be equivalent to South Korea's. By 1968, all homes had electricity, though the supply was unreliable. By 1972, all children from age 5 to 16 were enrolled in school, and over 200 universities and specialized colleges had been established. By the early 1980s, 60–70% of the population was urbanized.
In the 1970s, expansion of North Korea's economy, with the accompanying rise in living standards, came to an end. Compounding this was a decision to borrow foreign capital and invest heavily in military industries. North Korea's desire to lessen its dependence on aid from China and the Soviet Union prompted the expansion of its military power, which had begun in the second half of the 1960s. The government believed such expenditures could be covered by foreign borrowing and increased sales of its mineral wealth in the international market. North Korea invested heavily in its mining industries and purchased a large quantity of mineral extraction infrastructure from abroad. It also purchased entire petrochemical, textile, concrete, steel, pulp and paper manufacturing plants from the developed capitalist world. This included a Japanese-Danish venture that provided North Korea with the largest cement factory in the world. 
However, following the 1973 world oil crisis, international prices for many of North Korea's native minerals fell, leaving the country with large debts and an inability to pay them off and still provide a high level of social welfare to its people. North Korea began to default in 1974 and halted almost all repayments in 1985. As a result, it was unable to pay for foreign technology. Worsening this already poor situation, the centrally planned economy, which emphasized heavy industry, had reached the limits of its productive potential in North Korea. The repeated demands of "Juche" that North Koreans learn to build and innovate domestically had run their course, as had the country's ability to keep technological pace with other industrialized nations. By the mid-to-late 1970s some parts of the capitalist world, including South Korea, were creating new industries based around computers, electronics, and other advanced technology in contrast to North Korea's Stalinist economy of mining and steel production. Migration to urban areas stalled.
Despite the emerging economic problems, the regime invested heavily in prestigious projects, such as the "Juche" Tower, the Nampo Dam, and the Ryugyong Hotel. In 1989, as a response to the 1988 Seoul Olympics, it held the 13th World Festival of Youth and Students in Pyongyang. In fact, the grandiosity associated with the regime and its personality cult, as expressed in monuments, museums, and events, has been identified as a factor in the economic decline.
In 1984, Kim visited Moscow during a grand tour of the USSR where he met Soviet leader Konstantin Chernenko. Kim also made public visits to East Germany, Czechoslovakia, Poland, Hungary, Romania, Bulgaria and Yugoslavia. Soviet involvement in the North Korean economy increased until 1988, when bilateral trade peaked at US$2.8 billion. In 1986, Kim met the incoming Soviet leader Mikhail Gorbachev in Moscow and received a pledge of support. However, Gorbachev's reforms and diplomatic initiatives, the Chinese economic reforms starting in 1979, and the collapse of the Eastern Bloc from 1989 to 1991 increased North Korea's isolation. The leadership in Pyongyang responded by proclaiming that the collapse of the Eastern Bloc communist governments demonstrated the correctness of the policy of "Juche". The collapse of the Soviet Union in 1991 deprived North Korea of its main source of economic aid, leaving China as the isolated regime's only major ally. Without Soviet aid, North Korea's economy went into a free-fall.
By this time in the early 1990s, Kim Jong-il was already conducting most of the day-to-day activities of running the state. Meanwhile, international tensions were rising over North Korea's quest for nuclear weapons. Former US president Jimmy Carter made a visit to Pyongyang in June 1994 in which he met with Kim, and returned proclaiming that he had resolved the crisis. Kim Il-sung died from a sudden heart attack on July 8, 1994, three weeks after the Carter visit. His son, Kim Jong-il, who had already assumed key positions in the government, succeeded him, although he did not immediately take up his father's top posts: for a time the party had no general secretary and the state had no president. What minimal legal procedure had been established was summarily ignored. Although a new constitution appeared to end the wartime political system, it did not completely terminate the transitional military rule. 
Rather it legitimized and institutionalized military rule by making the National Defense Commission (NDC) the most important state organization and its chairman the highest authority. After three years of consolidating his power, Kim Jong-il became Chairman of the NDC on October 8, 1997, a position described by the NDC as the nation's "highest administrative authority", and thus North Korea's "de facto" head of state. His succession had been foreshadowed in 1980, when he was introduced to the public at the Sixth Party Congress. In 1982, Kim Jong-il had established himself as a leading theoretician with the publication of "On the Juche Idea". In 1984, he had been officially confirmed as his father's successor. Although the succession of Kim Jong-il coincided with much societal upheaval, and the succession is conventionally seen as a turning point of North Korean history, the change in leadership hardly had direct consequences. Politics in the last years of Kim Il-sung's rule closely resembled those of the beginning of the Kim Jong-il era.
The economy was in steep decline. Between 1990 and 1995, foreign trade was cut in half, with the loss of subsidized Soviet oil being particularly keenly felt. The crisis came to a head in 1995 with widespread flooding that destroyed crops and infrastructure, leading to a famine that lasted until 1998. At the same time, there appeared to be little significant internal opposition to the regime. Indeed, a great many of the North Koreans fleeing to China because of famine still showed significant support for the government as well as pride in their homeland. Many of these people reportedly returned to North Korea after earning sufficient money. In 1998, the government announced a new policy called "Songun", or "Military First". Some analysts suggested that this meant the Korean People's Army was now more powerful than the Workers' Party.
President Kim Dae-jung of South Korea actively attempted to reduce tensions between the two Koreas under the Sunshine Policy. After the election of George W. Bush as the President of the United States in 2000, North Korea faced renewed pressure over its nuclear program. On October 9, 2006, North Korea announced that it had successfully detonated a nuclear bomb underground. An official at South Korea's seismic monitoring center confirmed that a magnitude-3.6 tremor felt at the time was not a natural occurrence. Additionally, North Korea was running a missile development program. In 1998, North Korea tested a Taepodong-1 Space Launch Vehicle, which successfully launched but failed to reach orbit. On July 5, 2006, it tested a Taepodong-2 ICBM that reportedly could reach the west coast of the U.S. in the 2-stage version, or the entire U.S. with a third stage. However, the missile failed shortly after launch.
On February 13, 2007, North Korea signed an agreement with South Korea, the United States, Russia, China, and Japan, which stipulated North Korea would shut down its Yongbyon nuclear reactor in exchange for economic and energy assistance. However, in 2009 the North continued its nuclear test program. In 2010, the sinking of a South Korean naval ship, the Cheonan, allegedly by a North Korean torpedo, and North Korea's shelling of Yeonpyeong Island escalated tensions between North and South. Kim Jong-il died on December 17, 2011 and was succeeded by his son, Kim Jong-un. In late 2013, Kim Jong-un's uncle Jang Song-thaek was arrested and executed after a trial. 
According to the South Korean spy agency, Kim may have purged some 300 people after taking power. In 2014, the United Nations Commission of Inquiry accused the government of crimes against humanity. In 2015, North Korea adopted Pyongyang Standard Time (UTC+08:30), reversing the change to Japan Standard Time (UTC+09:00) which had been imposed by the Japanese Empire when it annexed Korea. As a result, North Korea was in a different time zone from South Korea. In 2016, the 7th Congress of the Workers' Party of Korea was held in Pyongyang, the first party congress since 1980. In 2017, North Korea tested the Hwasong-15, an intercontinental ballistic missile capable of striking anywhere in the United States of America. Estimates of North Korea's nuclear arsenal at that time ranged between 15 and 60 bombs, probably including hydrogen bombs.
In February 2018, North Korea sent an unprecedented high-level delegation to the Winter Olympics in South Korea, headed by Kim Yo-jong, sister of Kim Jong-un, and President Kim Yong-nam, which passed on an invitation to South Korean President Moon to visit the North. In April the two Korean leaders met at the Joint Security Area, where they announced their governments would work towards a denuclearized Korean Peninsula and formalize peace between the two states. North Korea announced it would change its time zone to realign with the South. On June 12, 2018, Kim met American President Donald Trump at a summit in Singapore and signed a declaration, again affirming a commitment to peace and denuclearization. Trump announced that he would halt military exercises with South Korea and suggested that American troops might eventually be withdrawn entirely. In September, South Korean President Moon visited Pyongyang for a summit with Kim. In February 2019 in Hanoi, a second summit between Kim and Trump broke down without an agreement. On June 30, 2019, Trump, Moon, and Kim met at the DMZ. Talks in Stockholm began in October between US and North Korean negotiating teams, but broke down after one day.
Geography of North Korea
North Korea is located in East Asia on the northern half of the Korean Peninsula. North Korea shares a border with three countries: China along the Amnok River, Russia along the Tumen River, and South Korea along the Korean Demilitarized Zone (DMZ). The Yellow Sea and the Korea Bay are off the west coast and the Sea of Japan (East Sea of Korea) is off the east coast. Most of North Korea is a series of medium-sized to large-sized mountain ranges and large hills, separated by deep, narrow valleys. The highest peak, Paektu-san, a volcanic mountain on the northern border with China, rises to 2,744 m (9,003 ft). Along the west coast there are wide coastal plains, while along the East Sea coastline (at sea level, North Korea's lowest point) narrow plains rise into mountains. Similar to South Korea, dozens of small islands dot the western coastline. North Korea's longest river is the Amnok (Yalu). Other large rivers include the Tumen, Taedong and Imjin.
The terrain consists mostly of hills and mountains separated by deep, narrow valleys. The coastal plains are wide in the west and discontinuous in the east. Early European visitors to Korea remarked that the country resembled "a sea in a heavy gale" because of the many successive mountain ranges that crisscross the peninsula. Some 80 percent of North Korea's land area is composed of mountains and uplands, with all of the peninsula's mountains with elevations of or more located in North Korea. The great majority of the population lives in the plains and lowlands. Paektu Mountain, the highest point in North Korea at 2,744 m (9,003 ft), is a volcanic mountain near Manchuria with a basalt lava plateau with elevations between and above sea level. The Hamgyong Range, located in the extreme northeastern part of the peninsula, has many high peaks, including Kwanmobong at approximately . Other major ranges include the Rangrim Mountains, which are located in the north-central part of North Korea and run in a north-south direction, making communication between the eastern and western parts of the country rather difficult; and the Kangnam Range, which runs along the North Korea–China border. Geumgangsan, often written Mt Kumgang or Diamond Mountain (approximately ), in the Thaebaek Range, which extends into South Korea, is famous for its scenic beauty.
For the most part, the plains are small. The most extensive are the Pyongyang and Chaeryŏng plains, each covering about 500 km². Because the mountains on the east coast drop abruptly to the sea, the plains are even smaller there than on the west coast.
The mountain ranges in the northern and eastern parts of North Korea form the watershed for most of its rivers, which run in a westerly direction and empty into the Yellow Sea and Korea Bay. The longest is the Amnok River, which is navigable for 678 km of its . The Tumen River, one of the few major rivers to flow into the Sea of Japan, is the second longest at but is navigable for only because of the mountainous topography. The third longest river, the Taedong River, flows through Pyongyang and is navigable for 245 of its 397 km. Lakes tend to be small because of the lack of glacial activity and the stability of the Earth's crust in the region. Unlike neighboring Japan or northern China, North Korea experiences few severe earthquakes. The country has a number of natural spas and hot springs, which number 124 according to one North Korean source. 
North Korea has a combination of a continental climate and an oceanic climate, with four distinct seasons. Most of North Korea is classified as being of a humid continental climate within the Köppen climate classification scheme, with warm summers and cold, dry winters. In summer, there is a short rainy season called "changma". Long winters bring bitter cold and clear weather interspersed with snow storms as a result of northern and northwestern winds that blow from Siberia. The daily average high and low temperatures for Pyongyang in January are . On average, it snows thirty-seven days during the winter. Winter can be particularly harsh in the northern, mountainous regions. Summer tends to be short, hot, humid, and rainy because of the southern and southeastern monsoon winds that bring moist air from the Pacific Ocean. Spring and autumn are transitional seasons marked by mild temperatures and variable winds and bring the most pleasant weather. The daily average high and low temperatures for Pyongyang in August are . On average, approximately 60% of all precipitation occurs from June to September. Natural hazards include late spring droughts which are often followed by severe flooding. Typhoons affect the peninsula on an average of at least once every summer or early autumn. The drought that started in June 2015, according to the Korean Central News Agency, has been the worst seen in 100 years. The environment of North Korea is diverse, encompassing alpine, forest, farmland, freshwater, and marine ecosystems. In recent years, the environment has been reported to be in a state of "crisis", "catastrophe", or "collapse". Cultivation, logging, and natural disasters have all put pressure on North Korea's forests. During the economic crisis of the 1990s, deforestation accelerated, as people turned to the woodlands to provide firewood and food. This in turn has led to soil erosion, soil depletion, and increased risk of flooding. In response, the government has promoted a tree planting program. Based on satellite imagery, it has been estimated that 40 percent of forest cover has been lost since 1985. North Korea has an area of 120,538 km², of which 120,408 km² is land and 130 km² is water. It has of land boundaries; of these, are with China, are with South Korea, and are with Russia. The Korean Peninsula extends about southward from the northeast Asian continental landmass. The coastline of Korea is highly irregular, and North Korea accounts for of this, roughly one-third. Some 3579 islands lie adjacent to the Korean Peninsula, mostly along the south and west coasts. The southern stretch of its east coast forms the northern side of the East Korea Bay. At the headland Musu Dan, this ends and the coast turns sharply northward. The North Korean government claims territorial waters extending from shore. It also claims an exclusive economic zone from shore. In addition, a maritime military boundary that lies offshore in the Sea of Japan (East Sea of Korea) and offshore in the Yellow Sea demarcates the waters and airspace into which foreign ships and planes are prohibited from entering without permission. Waters of the Yellow Sea are demarcated between North Korea and South Korea by the disputed Northern Limit Line drawn by the United Nations Command (Korea) in early 1950s and not officially recognized by North Korea. Disputes between North and South Korean naval vessels have occurred in this area. 
Five such disputes were notable enough to be reported in the news (three in 2009 and two in 2010). Natural resources include coal, petroleum, lead, tungsten, zinc, graphite, magnesite, iron ore, copper, gold, pyrites, salt, fluorspar and hydropower.
Demographics of North Korea
The demographics of North Korea are known through national censuses and international estimates. The Central Bureau of Statistics of North Korea conducted the most recent census in 2008, which recorded a population of 24 million inhabitants. The population density is 199.54 inhabitants per square kilometre, and the 2014 estimated life expectancy is 69.81 years. Since 1980, the population has risen at a nearly consistent but low rate (0.84%, based on the two censuses). Since 2000, North Korea's birth rate has exceeded its death rate; the natural growth is positive. In terms of age structure, the population is dominated by the 15–64-year-old segment (68.09%). The median age of the population is 32.9 years, and the gender ratio is 0.95 males to 1.00 female. North Korean women now have two children on average, down from three in the early 1980s. According to "The World Factbook", North Korea is racially homogeneous and contains a small Chinese community and a few ethnic Japanese. The 2008 census listed two nationalities: Korean (%) and Other (%).
Korea was annexed by the Empire of Japan in 1910, bringing the Korean Peninsula under Japanese occupation. In 1945, when Japan was defeated in World War II, Korea was divided into two occupied zones: the North occupied by the Soviet Union and the South by the United States. Negotiations on unification failed, and in 1948 two separate countries were formed: North and South Korea.
Korean is the official language of North Korea. "The World Factbook" states "traditionally Buddhist and Confucianist, some Christian and syncretic Chondogyo" in regard to religion, but also states "autonomous religious activities now almost nonexistent; government-sponsored religious groups exist to provide illusion of religious freedom". Some 8.86% of the population older than 5 years of age have attained academic degrees. In 2000, North Korea spent 38.2% of its expenditures on education, social insurance, and social security. Estimates show that, in 2012, gross domestic product (GDP) per capita was $1,800. The most significant sources of employment were machine building and manufacturing of metallurgical products, military products, and textiles. In 2006, the unemployment rate was between 14.7% and 36.5%. The 2008 census enumerated 5,887,471 households, averaging 3.9 persons per household. The average urbanization rate was 60.3% in 2011.
During the North Korean famine of 1994–1998, somewhere between 240,000 and 3,500,000 North Koreans died from starvation or hunger-related illnesses, with deaths peaking in 1997. A 2011 U.S. Census Bureau report put the likely number of excess deaths during 1993 to 2000 at between 500,000 and 600,000.
Until the release of official data in 1989, the 1963 edition of the North Korea Central Yearbook was the last official publication to disclose population figures. After 1963 demographers used varying methods to estimate the population. They either totaled the number of delegates elected to the Supreme People's Assembly (each delegate representing 50,000 people before 1962 and 30,000 people afterward) or relied on official statements that a certain number of persons, or percentage of the population, was engaged in a particular activity. Thus, on the basis of remarks made by President Kim Il-sung in 1977 concerning school attendance, the population that year was calculated at 17.2 million persons. During the 1980s, health statistics, including life expectancy and causes of mortality, were gradually made available to the outside world. 
In 1989 the Central Bureau of Statistics released demographic data to the United Nations Fund for Population Activities (UNFPA) to secure the UNFPA's assistance in holding North Korea's first nationwide census since 1946. Although the figures given to the United Nations (UN) might have been purposely distorted, it appears that, in line with other attempts to open itself to the outside world, the North Korean regime has also opened somewhat in the demographic realm. Although the country lacks trained demographers, accurate data on household registration, migration, and births and deaths are available to North Korean authorities. According to the United States scholar Nicholas Eberstadt and demographer Judith Banister, vital statistics and personal information on residents are kept by agencies on the ri, or ni (village, the local administrative unit), level in rural areas and the dong (district or block) level in urban areas. The next census is scheduled for 2018. In their 1992 monograph, "The Population of North Korea", Eberstadt and Banister use the data given to the UNFPA and make their own assessments. They place the total population at 21.4 million persons in mid-1990, consisting of 10.6 million males and 10.8 million females. This figure is close to an estimate of 21.9 million persons for mid-1988 cited in the 1990 edition of the "Demographic Yearbook" published by the UN. "Korean Review", a book by Pan Hwan Ju published by the Pyongyang Foreign Languages Press in 1987, gives a figure of 19.1 million persons for 1986. The figures disclosed by the government reveal an unusually low proportion of males to females: in 1980 and 1987, the male-to-female ratios were 86.2 to 100 and 84.2 to 100, respectively. Low male-to-female ratios are usually the result of a war, but these figures were lower than the sex ratio of 88.3 males per 100 females recorded for 1953, the last year of the Korean War. The male-to-female ratio would be expected to rise to a normal level with the passage of years, as happened between 1953 and 1970, when the figure was 95.1 males per 100 females. After 1970, however, the ratio declined. Eberstadt and Banister suggest that before 1970 male and female population figures included the whole population, yielding ratios in the nineties, but that after that time the male military population was excluded from population figures. Based on the figures provided by the Central Statistics Bureau, Eberstadt and Banister estimate that the actual size of the "hidden" male North Korean military had reached 1.2 million by 1986 and that the actual male-to-female ratio was 97.1 males to 100 females in 1990. If their estimates are correct, 6.1 percent of North Korea's total population was in the military, numerically the world's fifth largest military force, in the late 1980s (fourth largest). A survey in 2017 found that the famine had skewed North Korea's demography, particularly affecting male infants. Women aged 20–24 made up 4% of the population, while men in the same age group made up only 2.5%. The annual population growth rate in 1960 was 2.7 percent, rising to a high of 3.6 percent in 1970 and falling to 1.9 percent in 1975. This fall reflected a dramatic decline in the fertility rate: the average number of children born to women decreased from 6.5 in 1966 to 2.5 in 1988.
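The "hidden" military estimate rests on simple sex-ratio arithmetic. The sketch below is illustrative rather than Eberstadt and Banister's published procedure: the reported 1987 ratio (84.2) and the assumed true ratio (97.1) come from the text, while the female population of 10.5 million is an assumed round figure.

```python
# Illustrative sex-ratio arithmetic (not the authors' actual procedure).
def hidden_males(female_pop_millions: float,
                 reported_ratio: float,
                 assumed_true_ratio: float) -> float:
    """Ratios are males per 100 females; result is in millions of men."""
    reported_males = female_pop_millions * reported_ratio / 100
    expected_males = female_pop_millions * assumed_true_ratio / 100
    return expected_males - reported_males

# 10.5 million females is an assumed figure; 84.2 and 97.1 are from the text.
print(f"{hidden_males(10.5, 84.2, 97.1):.2f} million")  # ~1.35 million, the same
# order of magnitude as the 1.2 million "hidden" military personnel cited above.
```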
Assuming the data is reliable, reasons for falling growth rates and fertility rates probably include late marriage, urbanization, limited housing space, and the expectation that women would work equal hours in the labor force. The experience of other socialist countries suggests that widespread labor force participation by women often goes hand-in-hand with more traditional role expectations; in other words, they are still responsible for housework and childrearing. The high percentage of males aged 17 to 26 in military service may also have contributed to the low fertility rate. According to Eberstadt and Banister's data, the annual population growth rate in 1991 was 1.9 percent. However, the CIA World Factbook estimated that North Korea's annual population growth rate was 1.0% in 1991 and that it had declined to 0.4% by 2009. The North Korean government seems to perceive its population as too small in relation to that of South Korea. In its public pronouncements, Pyongyang has called for accelerated population growth and encouraged large families. According to one Korean American scholar who visited North Korea in the early 1980s, the country has no birth control policies; parents are encouraged to have as many as six children. The state provides "t'agaso" (nurseries) to lessen the burden of childrearing for parents and offers a 77-day paid leave after childbirth. Eberstadt and Banister suggest, however, that authorities at the local level make contraceptive information readily available to parents and that intrauterine devices are the most commonly adopted birth control method. An interview with a former North Korean resident in the early 1990s revealed that such devices are distributed free at clinics. Demographers determine the age structure of a given population by dividing it into five-year age groups and arranging them chronologically in a pyramid-like structure that "bulges" or recedes in relation to the number of persons in a given age cohort. Many poor, developing countries have a broad base and steadily tapering higher levels, which reflects a large number of births and young children but much smaller age cohorts in later years as a result of relatively short life expectancies. North Korea does not entirely fit this pattern; data reveal a "bulge" in the lower ranges of adulthood. In 1991, life expectancy at birth was approximately 66 years for males and almost 73 years for females. It is likely that annual population growth rates will increase, as will difficulties in employing the many young men and women entering the labor force in a socialist economy already suffering from stagnant growth. Eberstadt and Banister project that the population will stabilize (that is, cease to grow) at 34 million persons in 2045 and will then experience a gradual decline. North Korea's population is concentrated in the plains and lowlands. The least populated regions are the mountainous Chagang and Yanggang provinces adjacent to the Chinese border. The largest concentrations of population are in North P'yŏngan and South P'yŏngan provinces, in the municipal district of Pyongyang, and in South Hamgyŏng Province, which includes the Hamhŭng-Hŭngnam urban area. Eberstadt and Banister calculate the average population density at 167 persons per square kilometer, ranging from 1,178 persons per square kilometer in Pyongyang Municipality to 44 persons per square kilometer in Yanggang Province. By contrast, South Korea had an average population density of 425 persons per square kilometer in 1989.
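The five-year cohort structure described above can be sketched as a simple text pyramid; the cohort counts below are invented purely to show the mechanics, including an early-adulthood "bulge", and are not North Korean data.

```python
# Toy age pyramid built from five-year cohorts (all figures hypothetical).
cohorts = {  # age group -> persons (thousands)
    "0-4": 1700, "5-9": 1800, "10-14": 1900,
    "15-19": 2100, "20-24": 2200,  # an early-adulthood "bulge"
    "25-29": 2000, "30-34": 1800, "35-39": 1500,
    "40-44": 1200, "45-49": 1000, "50+": 2300,
}
total = sum(cohorts.values())
widest = max(cohorts.values())
for group, count in cohorts.items():
    bar = "#" * round(40 * count / widest)
    print(f"{group:>6} {bar} {100 * count / total:.1f}%")
```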
Like South Korea, North Korea has experienced significant urban migration since the end of the Korean War. Official statistics reveal that 59.6 percent of the total population was classified as urban in 1987. This figure compares with only 17.7 percent in 1953. It is not entirely clear, however, what standards are used to define urban populations. Eberstadt and Banister suggest that although South Korean statisticians do not classify settlements of under 50,000 as urban, their North Korean counterparts include settlements as small as 20,000 in this category. In addition, people in North Korea who engage in agricultural pursuits inside municipalities sometimes are not counted as urban. Urbanization in North Korea seems to have proceeded most rapidly between 1953 and 1960, when the urban population grew between 12 and 20 percent annually. Subsequently, the increase slowed to about 6 percent annually in the 1960s and between 1 and 3 percent from 1970 to 1987. In 1987, North Korea's largest cities were Pyongyang, with approximately 2.3 million inhabitants; Hamhŭng, 701,000; Ch'ŏngjin, 520,000; Namp'o, 370,000; Sunch'ŏn, 356,000; and Sinŭiju, 289,000. In 1987, 11.5 percent of the total national population lived in Pyongyang. The government restricts and monitors migration to cities and ensures a relatively balanced distribution of population in provincial centers in relation to Pyongyang.
(Tables of births and deaths, from the Population Division of the United Nations Department of Economic and Social Affairs, and of average life expectancy at birth, from the Central Bureau of Statistics, are omitted here.)
Large-scale emigration from Korea began around 1904 and continued until the end of World War II. During the Japanese colonial occupation (1910–45), many Koreans emigrated to Northeast China, other parts of China, the Soviet Union, Hawaii, and the contiguous United States. People from Korea's northern provinces went primarily to Manchuria, China, and Siberia; many of those from the southern provinces went to Japan. Most emigrants left for economic reasons because employment opportunities were scarce; many Korean farmers had lost their land after the Japanese colonial government introduced a system of private land tenure, imposed higher land taxes, and promoted the growth of an absentee landlord class charging exorbitant rents. In the 1980s, more than 4 million ethnic Koreans lived outside the peninsula. The largest group, about 1.7 million people, lived in China (see Koreans in China); most had assumed Chinese citizenship. Approximately 1 million Koreans, almost exclusively from South Korea, lived in North America (see Korean Americans). About 389,000 ethnic Koreans resided in the former Soviet Union (see Koryosaram and Sakhalin Koreans). One observer noted that Koreans have been so successful in running collective farms in Soviet Central Asia that being Korean is often associated by other citizens with being rich; as a result, there is growing antagonism against Koreans. Smaller groups of Koreans are found in Central America and South America (85,000), the Middle East (62,000), Europe (40,000), Asia (27,000), and Africa (25,000). Many of Japan's approximately 680,000 Koreans have below-average standards of living, partly because of discrimination by the Japanese. Many resident Koreans, loyal to North Korea, remain separate from, and often hostile to, the Japanese social mainstream.
The pro-North Korean Chongryon (General Association of Korean Residents in Japan, known as Chosen Soren or Chosoren in Japanese) initially was more successful than the pro-South Korean Mindan (Association for Korean Residents in Japan) in attracting adherents. However, the widening disparity between the political and economic conditions of the two Koreas has since made Mindan the larger and certainly the less politically controversial faction. In addition, third- and fourth-generation Zainichi Chosenjin have largely given up active participation in or loyalty to the Chongryon ideology. Reasons stated for this increased disassociation include widespread mainstream tolerance of Koreans by Japanese in recent years, which has greatly reduced the need to rely on Chongryon, and the increasing unpopularity of Kim Jong-il even among loyal members of Chongryon. Between 1959 and 1982, Chongryon encouraged the repatriation of Korean residents in Japan to North Korea. More than 93,000 Koreans left Japan, the majority (80,000 persons) in 1960 and 1961. Thereafter, the number of repatriates declined, apparently because of reports of hardships suffered by their compatriots. Approximately 6,637 Japanese wives accompanied their husbands to North Korea, of whom about 1,828 retained Japanese citizenship in the early 1990s. Pyongyang had originally promised that the wives could return home every two or three years to visit their relatives. In fact, however, they are not allowed to do so, and few have had contact with their families in Japan. In normalization talks between North Korean and Japanese officials in the early 1990s, the latter urged unsuccessfully that the wives be allowed to make home visits. According to a defector, himself a former returnee, many petitioned to be returned to Japan and in response were sent to political prison camps. Japanese research puts the number of Zainichi Korean returnees condemned to prison camps at around 10,000.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
Population: 25,115,311 (July 2016 est.)
Age structure (2016 est.):
0–14 years: 20.97% (male 2,678,638 / female 2,588,744)
15–24 years: 15.88% (male 2,009,360 / female 1,977,942)
25–54 years: 44.22% (male 5,567,682 / female 5,537,077)
55–64 years: 9.19% (male 1,090,739 / female 1,218,406)
65 years and over: 9.74% (male 840,003 / female 1,606,720)
Population growth rate: 1.02% (1991 est.), 0.31% (1996 est.), 0.87% (2006 est.), 0.42% (2009 est.), 0.53% (2016 est.)
Birth rate: 20.01 births/1,000 population (1991 est.), 17.58 (1996 est.), 14.61 (2006 est.), 14.61 (2008 est.), 14.60 (2016 est.)
Death rate: 8.94 deaths/1,000 population (1991 est.), 9.52 (1996 est.), 7.29 (2006 est.), 7.29 (2008 est.), 9.30 (2016 est.)
Net migration rate: 0 migrant(s)/1,000 population (2016 est.)
Sex ratio (2016 est.):
at birth: 1.05 male(s)/female
0–14 years: 1.03 male(s)/female
15–24 years: 1.02 male(s)/female
25–54 years: 1.01 male(s)/female
55–64 years: 0.9 male(s)/female
65 years and over: 0.53 male(s)/female
total population: 0.94 male(s)/female
Infant mortality rate: 22.9 deaths/1,000 live births (2016 est.)
Life expectancy at birth (2016 est.): total population 70.4 years; male 66.6 years; female 74.5 years
Total fertility rate: 2.09 children born/woman (2006 est.), 1.94 (2010 est.), 1.96 (2016 est.)
Nationality: noun Korean(s); adjective Korean
Ethnic groups: racially homogeneous: Koreans; small Chinese community, a few ethnic Japanese, Singaporeans, ethnic Thai, ethnic Indian, ethnic African, Americans and ethnic Vietnamese
Religion: no statistics available; predominantly Cheondoism (see "religion in North Korea")
Language: Korean
Literacy (definition: age 15 and over can read and write Korean using the Korean script Hangul): total population 100%; male 100%; female 100% (2015 est.)
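As a rough consistency check, the 2016 cohort counts listed above can be summed to reproduce the reported total population and the overall sex ratio.

```python
# Cross-check of the 2016 age-structure figures listed above.
males = [2_678_638, 2_009_360, 5_567_682, 1_090_739, 840_003]
females = [2_588_744, 1_977_942, 5_537_077, 1_218_406, 1_606_720]
total_males, total_females = sum(males), sum(females)
print(f"{total_males + total_females:,}")     # 25,115,311 - the July 2016 estimate
print(round(total_males / total_females, 2))  # 0.94 males per female
```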
https://en.wikipedia.org/wiki?curid=21258
Politics of North Korea The politics of North Korea (officially the Democratic People's Republic of Korea or DPRK) takes place within the framework of the official state philosophy, "Juche", a concept created by Hwang Jang-yop and later attributed to Kim Il-sung. The "Juche" theory is the belief that only through self-reliance and a strong independent state can true socialism be achieved. North Korea's political system is built upon the principle of centralization. While the North Korean constitution formally guarantees protection of human rights, in practice there are severe limits on freedom of expression, and the government closely supervises the lives of North Korean citizens. The constitution defines North Korea as "a dictatorship of people's democracy" under the leadership of the Workers' Party of Korea (WPK), which is given legal supremacy over other political parties. The WPK is the ruling party of North Korea and has been in power since its creation in 1948. Two minor political parties also exist, but are legally bound to accept the ruling role of the WPK. They, with the WPK, comprise a popular front known as the Democratic Front for the Reunification of the Fatherland (DFRF). Elections occur only in single-candidate races where the candidate is effectively selected beforehand by the WPK. In addition to the parties, there are over 100 mass organizations controlled by the WPK. Those who are not WPK members are required to join one of these organizations. Of these, the most important are the Kimilsungist-Kimjongilist Youth League, the Socialist Women's Union of Korea, the General Federation of Trade Unions of Korea, and the Union of Agricultural Workers of Korea. These four organizations are also DFRF members. Kim Il-sung ruled the country from 1948 until his death in July 1994, holding the offices of General Secretary of the WPK from 1949 to 1994 (titled as Chairman from 1949 to 1972), Premier of North Korea from 1948 to 1972, and President from 1972 to 1994. He was succeeded by his son, Kim Jong-il. While the younger Kim had been his father's designated successor since the 1980s, it took him three years to consolidate his power. He was named to his father's old post of General Secretary in 1997, and in 1998 became chairman of the National Defence Commission (NDC), which gave him command of the armed forces. The constitution was amended to make the NDC chairmanship "the highest post in the state." At the same time, the presidential post was written out of the constitution, and Kim Il-sung was designated "Eternal President of the Republic" to honor his memory. Most analysts believe the title to be a product of the cult of personality he cultivated during his life. Outside observers generally view North Korea as a Stalinist dictatorship, particularly noting the elaborate cult of personality around Kim Il-sung and his family. The Workers' Party of Korea (WPK), led by a member of the ruling family, holds power in the state and leads the Democratic Front for the Reunification of the Fatherland, of which all political officers are required to be members. The government has formally replaced all references to Marxism–Leninism in its constitution with the locally developed concept of "Juche", or self-reliance. In recent years, there has been great emphasis on the "Songun" or "military-first" philosophy. All references to communism were removed from the North Korean constitution in 2009.
The status of the military has been enhanced, and it appears to occupy the center of the North Korean political system; all the social sectors are forced to follow the military spirit and adopt military methods. Kim Jong-il's public activity focused heavily on "on-the-spot guidance" of places and events related to the military. The enhanced status of the military and the military-centered political system was confirmed at the first session of the 10th Supreme People's Assembly (SPA) by the promotion of NDC members into the official power hierarchy. All ten NDC members were ranked within the top twenty on 5 September 1998, and all but one occupied the top twenty at the fiftieth anniversary of the Day of the Foundation of the Republic on 9 September. According to the Constitution of North Korea, the country is a democratic republic, and the Supreme People's Assembly (SPA) and Provincial People's Assemblies (PPA) are elected by direct universal suffrage and secret ballot. Suffrage is guaranteed to all citizens aged 17 and over. In reality, elections in North Korea are for show and feature single-candidate races only. Those who want to vote against the sole candidate on the ballot must go to a special booth, in the presence of an electoral official, to cross out the candidate's name before dropping it into the ballot box, an act which, according to many North Korean defectors, is far too risky to even contemplate. All elected candidates are members of the Democratic Front for the Reunification of the Fatherland (DFRF), a popular front dominated by the ruling Workers' Party of Korea (WPK). The two minor parties in the coalition are the Chondoist Chongu Party and the Korean Social Democratic Party; they also have a few elected officials. The WPK exercises direct control over the candidates selected for election by members of the other two parties. In the past, elections were contested by other minor parties as well, including the Korea Buddhist Federation, Democratic Independent Party, Dongro People's Party, Gonmin People's Alliance, and People's Republic Party. Originally a close ally of Joseph Stalin's Soviet Union, North Korea has increasingly emphasized "Juche", an ideology of socialist self-reliance that has its roots in Marxism–Leninism but adapts it to the specific conditions of North Korea. "Juche" was enshrined as the official ideology when the country adopted a new constitution in 1972. In 2009, the constitution was amended again, quietly removing the brief references to communism. However, North Korea continues to see itself as part of a worldwide leftist movement. The Workers' Party maintains a relationship with other leftist parties, sending a delegation to the International Meeting of Communist and Workers' Parties. North Korea has a strong relationship with Cuba; in 2016, the North Korean government declared a three-day mourning period after the death of Fidel Castro. For much of its history, North Korean politics have been dominated by its adversarial relationship with South Korea. During the Cold War, North Korea aligned with the Soviet Union and the People's Republic of China. The North Korean government invested heavily in its military, hoping to develop the capability to reunify Korea by force if possible and also preparing to repel any attack by South Korea or the United States.
Following the doctrine of "Juche", North Korea aimed for a high degree of economic independence and the mobilization of all the resources of the nation to defend Korean sovereignty against foreign powers. In the wake of the collapse of the Soviet Union in the early 1990s and the loss of Soviet aid, North Korea faced a long period of economic crisis, including severe agricultural and industrial shortages. North Korea's main political issue has been to find a way to sustain its economy without compromising the internal stability of its government or its ability to respond to perceived external threats. Recently, North Korean efforts to improve relations with South Korea in order to increase trade and to receive development assistance have been mildly successful. North Korea has tried to improve its relations with South Korea by participating in the 2018 Winter Olympics in Pyeongchang, when Kim Jong-un sent a musical ensemble and a few officials to visit South Korea. But North Korea's determination to develop nuclear weapons and ballistic missiles has prevented stable relations with both South Korea and the United States. North Korea has also experimented with market economics in some sectors of its economy, but these experiments have had limited impact. Although there are occasional reports of signs of opposition to the government, these appear to be isolated, and there is no evidence of major internal threats to the current government. Some foreign analysts have pointed to widespread starvation, increased emigration across the North Korea–China border, and new sources of information about the outside world for ordinary North Koreans as factors pointing to an imminent collapse of the regime. However, North Korea has remained stable in spite of more than a decade of such predictions. The Workers' Party of Korea maintains a monopoly on political power, and Kim Jong-il remained the leader of the country from the death of his father until 2011. After the death of Kim Il-sung in 1994, his son Kim Jong-il took over as the new leader, marking the close of one chapter of North Korean politics. Combined with external shocks and Kim Jong-il's less charismatic personality, the leadership transition moved North Korea toward less centralized control. There are three key institutions: the Korean People's Army (KPA), the Korean Workers' Party (KWP), and the cabinet. Rather than being dominated by a single unified system as under his father, each of these institutions pursues its own enduring goals, providing a degree of checks and balances within the government; no single institution could claim power over the others. With the changing internal situation, combined with external pressure, the cabinet began to endorse policies it had rejected for years. North Korean politics has gradually become more open to negotiation with foreign countries, and the leadership's willingness to meet other heads of state has been seen as a step towards negotiation. According to Cheong Seong-chang of the Sejong Institute, speaking on 25 June 2012, there is some possibility that the new leader Kim Jong-un, who has greater visible interest in the welfare of his people and engages in greater interaction with them than his father did, will consider economic reforms and normalization of international relations.
In June 2011, it was reported that the government had ordered universities to cancel most classes until April 2012, sending students to work on construction projects, presumably for fear of developments similar to those in North Africa. In the previous months, the regime had ordered anti-riot gear from China. However, "as soon as universities were reopened, graffiti appeared again. Perhaps the succession is not the real reason, but greater awareness among North Koreans could lead to changes." After the death of Kim Jong-il on December 17, 2011, his son Kim Jong-un inherited the political leadership of the DPRK. The succession of power was immediate: Kim Jong-un became supreme commander of the Korean People's Army (KPA) on December 30, 2011, was appointed first secretary of the Korean Workers' Party (KWP) on April 11, 2012, and was named first chairman of the National Defense Commission (NDC) two days later. To consolidate political power, he also assumed the rank of Marshal of the KPA. Up until his death, Kim Jong-il maintained a strong national military-first political system that equated stability with military power. Kim Jong-un continues to carry on the militarized political style of his father, but with less commitment to complete military rule. Since he took power, Kim Jong-un has attempted to move political power away from the KPA and has divided it between the WPK and the cabinet. Because of his political lobbying, power within the WPK's Central Committee shifted markedly in April 2012: of the Committee's 17 members and 15 alternates, only five members and six alternates came from the military and security sectors. Ever since, the economic power of the WPK, the cabinet, and the KPA has been in a tense balance. The KPA has lost a significant amount of economic influence under the current regime, which continues to shift away from the military-first foundation on which Kim Jong-il built his rule, and this may cause internal friction later.
https://en.wikipedia.org/wiki?curid=21259
Economy of North Korea The economy of North Korea is a centrally planned system, where the role of market allocation schemes is limited, although increasing. North Korea continues its basic adherence to a centralized command economy. There has been some economic liberalization, particularly after Kim Jong-un assumed the leadership in 2012, but reports conflict over particular legislation and enactment. According to the Heritage Foundation's economic freedom ranking, North Korea's economic freedom score is 5.9, making it the least free of the 180 economies measured in the 2019 Index. The collapse of the Eastern Bloc from 1989 to 1991, particularly of North Korea's principal source of support, the Soviet Union, forced the North Korean economy to realign its foreign economic relations, including increased economic exchanges with South Korea. China is North Korea's largest trading partner. North Korea's ideology of Juche has resulted in the country pursuing autarky in an environment of international sanctions. While the current North Korean economy is still dominated by state-owned industry and collective farms, foreign investment and corporate autonomy have slightly increased. North Korea had a similar GDP per capita to its neighbor South Korea from the aftermath of the Korean War until the mid-1970s, but had a GDP per capita of less than $2,000 in the late 1990s and early 21st century. For 2018 the Bank of Korea estimated GDP growth at −4.1%. In 2019, North Korea was ranked 172nd in the Transparency International Corruption Perceptions Index with a score of 17 out of 100. Estimating gross national product in North Korea is a difficult task because of a dearth of economic data and the problem of choosing an appropriate rate of exchange for the North Korean won, the nonconvertible North Korean currency. The South Korean government's estimate placed North Korea's GNP in 1991 at US$22.9 billion, or US$1,038 per capita. In contrast, South Korea posted US$237.9 billion of GNP and a per capita income of US$5,569 in 1991. North Korea's GNP in 1991 showed a 5.2% decline from 1989, and preliminary indications were that the decline would continue. South Korea's GNP, by contrast, expanded by 9.3% and 8.4%, respectively, in 1990 and 1991. It is estimated that North Korea's GNP nearly halved between 1990 and 1999. North Korean annual budget reports suggest state income roughly tripled between 2000 and 2014. By about 2010 external trade had returned to 1990 levels. The South Korea-based Bank of Korea estimated that from 2000 to 2013 average growth was 1.4% per year and that the real GDP of North Korea in 2015 was 30,805 billion South Korean won; it has also published annual estimates of North Korea's GDP growth. According to analyst Andrei Lankov, writing in 2017, a significant number of observers believe that the Bank of Korea's estimates are too conservative and that the real growth rate is higher. North Korea reported that the government budget had been increasing at between 5% and 10% annually from 2007 to 2015. Reported planned capital expenditure, mainly on roads and public buildings, increased by 4.3% in 2014, 8.7% in 2015, and 13.7% in 2016. According to a North Korean economist, the growth rate was 3.7% in 2017, lifting GDP to $29.6 billion in 2018. The Australian government estimated 1.3% growth in 2017, while the South Korean government estimated −3.5%. In 2018, North Korea's government budget revenue plan was overfulfilled by 1.4%, an increase of 4.6% over 2017.
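The sensitivity to the chosen exchange rate mentioned above can be shown with a minimal sketch; the won-denominated output figure and both exchange rates below are hypothetical, chosen only to illustrate the mechanics rather than to reproduce any actual DPRK statistic.

```python
# Hypothetical illustration of exchange-rate sensitivity in GNP-per-capita estimates.
def per_capita_usd(output_billion_won: float,
                   won_per_usd: float,
                   population_millions: float) -> float:
    gnp_usd_billion = output_billion_won / won_per_usd
    return gnp_usd_billion * 1_000 / population_millions  # USD per person

output_won = 50.0         # billion won, hypothetical
population = 22.0         # million people, early-1990s order of magnitude
for rate in (2.2, 90.0):  # hypothetical "official" vs. market-style rates
    print(f"{rate:5.1f} won/USD -> ${per_capita_usd(output_won, rate, population):,.0f} per capita")
# The same won figure yields roughly $1,000 or roughly $25 per person depending
# on the rate chosen, a difference of about forty to one.
```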
Beginning in the mid-1920s, the Japanese colonial administration in Korea concentrated its industrial-development efforts in the comparatively under-populated and resource-rich northern portion of the country, resulting in a considerable movement of people northward from the agrarian southern provinces of the Korean Peninsula. This trend did not reverse until after the end of World War II in 1945, when more than 2 million Koreans moved from North to South following the division of Korea into Soviet and American military zones of administration. This southward exodus continued after the establishment of the Democratic People's Republic of Korea (North Korea) in 1948 and during the 1950–53 Korean War. The North Korean population as of October 2008 was given as 24 million. The post-World War II division of the Korean Peninsula resulted in imbalances of natural and human resources, with disadvantages for both the North and the South. In 1945, about 80% of Korean heavy industry was in the North but only 31% of light industry, 37% of agriculture, and 18% of the peninsula's total commerce. North and South Korea both suffered from the massive destruction caused during the Korean War. Historian Charles K. Armstrong stated that "North Korea had been virtually destroyed as an industrial society". In the years immediately after the war, North Korea mobilized its labour force and natural resources in an effort to achieve rapid economic development. Large amounts of aid from other communist countries, notably the Soviet Union and the People's Republic of China, helped the country achieve a high growth rate in the immediate postwar period. In 1961 an ambitious seven-year plan was launched to continue industrial expansion and increase living standards, but within three years it became clear this was failing, and the plan period was extended to 1970. The failure was due to reduced support from the Soviet Union after North Korea aligned more closely with China, and to military pressure from the U.S. that led to increased defence spending. In 1965 South Korea's rate of economic growth first exceeded North Korea's in most industrial areas, though South Korea's per capita GNP remained lower than North Korea's. In 1979, North Korea renegotiated much of its international debt, but in 1980 it defaulted on its loans except those from Japan. By the end of 1986, hard-currency debt had reached more than US$1 billion. It also owed nearly $2 billion to communist creditors, principally the Soviet Union. The Japanese declared North Korea in default. By 2000, taking into account penalties and accrued interest, North Korea's debt was estimated at $10–12 billion. By 2012, North Korea's external debt had grown to an estimated US$20 billion despite Russia reportedly writing off about $8 billion of debt in exchange for participation in natural resources development. Besides Russia, major creditors included Hungary, the Czech Republic and Iran. Largely because of these debt problems and because of a prolonged drought and mismanagement, North Korea's industrial growth slowed, and per capita GNP fell below that of the South. By the end of 1979, per capita GNP in North Korea was about one-third of that in the South. The causes for this relatively poor performance are complex, but a major factor is the disproportionately large percentage of GNP (possibly as much as 25%) that North Korea devotes to the military. There were minor efforts in the 1980s to relax central control of the economy, involving industrial enterprises.
Encouraged by Kim Jong-il's call to strengthen the implementation of the independent accounting system ("tongnip ch'aesanje") of enterprises in March 1984, interest in enterprise management and the independent accounting system increased, as evidenced by increasing coverage of the topic in North Korean journals. Under the system, factory managers still are assigned output targets but are given more discretion in decisions about labour, equipment, materials, and funds. In addition to fixed capital, each enterprise is allocated a minimum of working capital from the state through the Central Bank and is required to meet operating expenses with the proceeds from sales of its output. Up to 50% of the "profit" is taxed, the remaining half being kept by the enterprise for purchase of equipment, introduction of new technology, welfare benefits, and bonuses. As such, the system provides some built-in incentives and a degree of micro-level autonomy, unlike the budget allocation system, under which any surplus is turned over to the government in its entirety. Another innovation, the August Third People's Consumer Goods Production Movement, is centred on consumer goods production. This measure was so named after Kim Jong-il made an inspection tour of an exhibition of light industrial products held in Pyongyang on August 3, 1984. The movement charges workers to use locally available resources and production facilities to produce needed consumer goods. On the surface, the movement does not appear to differ much from the local industry programs in existence since the 1960s, although some degree of local autonomy is allowed. However, a major departure places output, pricing, and purchases outside central planning. In addition, direct sales stores were established to distribute goods produced under the movement directly to consumers. The movement is characterized as a third sector in the production of consumer goods, alongside centrally controlled light industry and locally controlled traditional light industry. Moreover, there were some reports in the mid-1980s of increasing encouragement of small-scale private handicrafts and farm markets. As of 1992, however, no move was reported to expand the size of private garden plots. All these measures appear to be minor stop-gap measures to alleviate severe shortages of consumer goods by infusing some degree of incentives. In mid-1993, no significant moves signalling a fundamental deviation from the existing system had occurred. The reluctance to initiate reform appears to be largely political. This concern is based on the belief that economic reform will produce new interests that will demand political expression and that demands for the institutionalization of such pluralism eventually will lead to political liberalization.
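The incentive difference between the two settlement rules described above can be shown with a line of arithmetic; the profit figure is hypothetical, and only the 50% retention share comes from the text.

```python
# Hypothetical enterprise profit settled under the two systems described above.
profit = 1_000_000  # won, hypothetical annual profit

# Budget allocation system: the entire surplus is remitted to the state.
remitted_all, retained_none = profit, 0

# Independent accounting system: up to 50% is taxed, the rest is kept by the
# enterprise for equipment, new technology, welfare benefits, and bonuses.
tax, retained = profit * 0.5, profit * 0.5

print(remitted_all, retained_none)  # 1000000 0
print(int(tax), int(retained))      # 500000 500000
```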
The fact that these targets were rolled over to the end of the Third Seven-Year Plan is another indication of the disappointing economic performance during the Second Seven-Year Plan. The three policy goals of self-reliance, modernization, and scientification were repeated. Economic growth was set at 7.9% annually, lower than the previous plan. Although achieving the ten major goals of the 1980s was the main thrust of the Third Seven-Year Plan, some substantial changes were made in specific quantitative targets. For example, the target for the annual output of steel was reduced by a third: from 15 million tons to 10 million tons. The output targets of cement and non-ferrous metals—two major export items—were increased significantly. The June 1989 introduction of the Three-Year Plan for Light Industry as part of the Third Seven-Year Plan was intended to boost the standard of living by addressing consumer needs. The Third Seven-Year Plan gave a great deal of attention to developing foreign trade and joint ventures, the first time a plan had addressed these issues. By the end of 1991, however, two years before the termination of the plan, no quantitative plan targets had been made public, an indication that the plan had not fared well. The diversion of resources to build highways, theatres, hotels, airports, and other facilities to host the Thirteenth World Festival of Youth and Students in July 1989 must have had a negative impact on industrial and agricultural development, although the expansion and improvement of social infrastructure resulted in some long-term economic benefits. Although general economic policy objectives are decided by the Central People's Committee (CPC), it is the task of the State Planning Committee to translate the broad goals into specific annual and long-term development plans and quantitative targets for the economy as a whole, as well as for each industrial sector and enterprise. Under the basic tenets of the 1964 reforms, the planning process is guided by the principles of "unified planning" ("ilwŏnhwa") and of "detailed planning" ("saebunhwa"). Under "unified planning", regional committees are established in each province, city, and county to systematically coordinate planning work. These committees do not belong to any regional organization and are directly supervised by the State Planning Committee. As a result of a reorganization in 1969, they are separated into provincial planning committees, city/county committees, and enterprise committees (for large-scale enterprises). The planning committees, under the auspices of the State Planning Committee, coordinate their work with the planning offices of the economy-related government organizations in the corresponding regional and local areas. The system attempts to enable the regional planning staff to better coordinate with economic establishments in their areas, which are directly responsible to them in planning, as well as to communicate directly with staff at the CPC. "Detailed planning" seeks to construct plans with precision and scientific methods based on concrete assessment of the resources, labour, funds, plant capacities, and other necessary information. There are four stages in drafting the final national economic plan. The plan then becomes legal and compulsory. Frequent directives from the central government contain changes in the plan targets or incentives for meeting the plan objectives.
Although the central government is most clearly involved in the formulation and evaluation of the yearly and long-term plans, it also reviews summaries of quarterly or monthly progress. Individual enterprises divide the production time into daily, weekly, ten-day, monthly, quarterly, and annual periods. In general, the monthly plan is the basic factory planning period. The success of an economic plan depends on the quality and detail of information received, the establishment of realistic targets, coordination among sectors, and correct implementation. High initial growth during the Three-Year Plan and, to a lesser extent, during the Five-Year Plan contributed to a false sense of confidence among the planners. Statistical over-reporting—an inherent tendency in an economy where rewards lie in fulfilling quantitative targets, particularly when the plan target year approaches—leads to overestimation of economic potential, poor product quality, and eventually to plan errors. Inefficient use of plants, equipment, and raw materials adds to planning errors. Lack of coordination in planning, and production competition among sectors and regions, cause imbalances and disrupt input-output relationships. The planning reforms in 1964 were supposed to solve these problems, but the need for correct and detailed planning and strict implementation of plans was so great that their importance was emphasized in the report unveiling the Second Seven-Year Plan, indicating that planning problems persisted in the 1980s. In the mid-1990s North Korea abandoned firm directive planning, and multi-year plans became more of a long-term economic strategy. In 2016, Kim Jong-un announced the first "Five-Year Plan" since the 1980s, which aims to further develop the economy. The "Ch'ŏngsan-ni Method" of management was born out of Kim Il-sung's February 1960 visit to the Ch'ŏngsan-ni Cooperative Farm in South P'yŏngan Province. Influenced by Mao Zedong's Great Leap Forward policy, Kim and other members of the KWP Central Committee offered "on-the-spot guidance" ("hyŏnji chido") and spent two months instructing and interacting with the workers. The avowed objective of this new method was to combat "bureaucratism" and "formalism" in the farm management system. The leadership claimed that farm workers were unhappy and produced low output because low-ranking functionaries of the Workers' Party of Korea (who expounded abstract Marxist theories and slogans) were using tactics that failed to motivate. To correct this, the leadership recommended that the workers receive specific guidance in solving production problems and be promised readily available material incentives. The Ch'ŏngsan-ni Method called for high-ranking party officials, party cadres, and administrative officials to emulate Kim Il-sung by making field inspections. The system provided opportunities for farmers to present their grievances and ideas to leading cadres and managers. Perhaps more important than involving administrative personnel in on-site inspections was the increased use of material incentives, such as paid vacations, special bonuses, honorific titles, and monetary rewards. In fact, the Ch'ŏngsan-ni Method appeared to accommodate almost any expedient to spur production. The method was subsequently undercut by heavy-handed efforts to increase farm production and amalgamate farms into ever-larger units.
Actual improvement in the agricultural sector began with the adoption of the subteam contract system as a means of increasing peasant productivity by adjusting individual incentives to those of the immediate, small working group. Thus the increasing scale of collective farms was somewhat offset by the reduction in the size of the working unit. "On-the-spot guidance" by high government functionaries, however, continued in the early 1990s, as exemplified by Kim Il-sung's visits to such places as the Wangjaesan Cooperative Farm in Onsŏng County and the Kyŏngsŏn Branch Experimental Farm of the Academy of Agricultural Sciences between August 20 and 30, 1991. Kim Jong-il carried on the tradition, despite having refused to do so before, and even expanded it to the Korean People's Army. Today Kim Jong-un continues the practices of the method. The industrial management system developed in three distinct stages. The first was a period of enterprise autonomy that lasted until December 1946. The second stage was a transitional system based on local autonomy, with each enterprise managed by the enterprise management committee under the direction of the local people's committee. This system was replaced by the "one-man management system", with management patterned along Soviet lines as large enterprises were nationalized and came under central control. The third stage, the "Taean" Work System ("Taeanŭi saŏpch'e"), was introduced in December 1961 as an application and refinement of agricultural management techniques to industry. The Taean industrial management system grew out of the "Ch'ŏngsan-ni" Method. The highest managerial authority under the Taean system is the party committee. Each committee has approximately 25 to 35 members elected from the ranks of managers, workers, engineers, and the leadership of "working people's organizations" at the factory. A smaller "executive committee", about one-quarter the size of the regular committee, has practical responsibility for day-to-day plant operations and major factory decisions. The most important staff members, including the party committee secretary, factory manager, and chief engineer, make up its membership. The system focuses on co-operation among workers, technicians, and party functionaries at the factory level. Each factory has two major lines of administration, one headed by the manager, the other by the party committee secretary. A chief engineer and his or her assistants direct a general staff in charge of all aspects of production, planning, and technical guidance. Depending on the size of the factory, varying numbers of deputies oversee factory logistics, marketing, and workers' services. The supply of materials includes securing, storing, and distributing all materials for factory use, as well as storing finished products and shipping them from the factory. Deputies assign workers to their units and handle factory accounts and payroll. Providing workers' services requires directing any farming done on factory lands, stocking factory retail shops, and taking care of all staff amenities. Deputies in charge of workers' services are encouraged to meet as many of the factory's needs as possible using nearby agricultural cooperatives and local industries. The secretary of the party committee organizes all political activities in each of the factory party cells and attempts to ensure loyalty to the party's production targets and management goals.
According to official claims, all management decisions are arrived at by consensus among the members of the party committee. Given the overwhelming importance of the party in the country's affairs, it seems likely that the party secretary has the last say in any major factory disputes. The Taean system heralded a more rational approach to industrial management than that practised previously. Although party functionaries and workers became more important to management under the new system, engineers and technical staff received more responsibility in areas where their expertise could contribute the most. The system recognizes the importance of material as well as "politico-moral" incentives for managing the factory workers. The "internal accounting system", a spin-off of the "independent accounting system", grants bonuses to work teams and workshops that use raw materials and equipment most efficiently. These financial rewards come out of enterprise profits. A measure of the success of the Taean work system is its longevity and its continued endorsement by the leadership. In his 1991 New Year's address marking the 30th anniversary of the creation of the system, Kim Il-sung said that the Taean work system is the best system of economic management, enabling the producer masses to fulfill their responsibility and role as masters and to manage the economy in a scientific and rational manner by implementing the mass line in economic management and by combining party leadership organically with administrative, economic, and technical guidance. Parallel to management techniques such as the Ch'ŏngsan-ni Method and the Taean work system, which were designed to increase output in more normalized and regularized operations of farms and enterprises, the leadership continuously resorts to exhortations and mass campaigns to motivate the workers to meet output targets. The earliest and most pervasive mass production campaign was the Ch'ŏllima Movement. Introduced in 1958 and fashioned after China's Great Leap Forward (1958–1960), the Ch'ŏllima Movement organized the labour force into work teams and brigades to compete at increasing production. The campaign was aimed at industrial and agricultural workers and at organizations in education, science, sanitation and health, and culture. In addition to work teams, units eligible for Ch'ŏllima citations included entire factories, factory workshops, and such self-contained units as a ship or a railroad station. The "socialist emulation" among the industrial sectors, enterprises, farms, and work teams under the Ch'ŏllima Movement frantically sought to complete the First Five-Year Plan (1957–1960) but instead created chaotic disruptions in the economy, making it necessary to set aside 1959 as a "buffer year" to restore balance. Although the Ch'ŏllima Movement was replaced in the early 1960s by the Ch'ŏngsan-ni Method and the Taean Work System, the regime's reliance on mass campaigns continued into the early 1990s. Campaigns conducted after the Ch'ŏllima Movement included "speed battles" toward the end of a period (such as a month, a year, or an economic plan) to reach production targets and carry out the economic goals of the decade. Following the collapse of the Soviet Union in 1991, the principal source of external support, North Korea announced in December 1993 a three-year transitional economic policy placing primary emphasis on agriculture, light industry, and foreign trade.
However, owing to a lack of fertilizer, natural disasters, and poor storage and transportation practices, the country fell more than a million tons per year short of grain self-sufficiency. Moreover, a lack of foreign exchange to purchase spare parts and oil for electricity generation left many factories idle. The shortage of foreign exchange, caused by a chronic trade deficit, a large foreign debt, and dwindling foreign aid, has constrained economic development. In addition, North Korea has been diverting scarce resources from developmental projects to defence; it spent more than 20% of GNP on defence toward the end of the 1980s, a proportion among the highest in the world. These negative factors, compounded by the declining efficiency of the central planning system and the failure to modernize the economy, have slowed the pace of growth since the 1960s. The demise of the communist regimes in the Soviet Union and Eastern Europe—North Korea's traditional trade partners and benefactors—compounded the economic difficulties of the early 1990s. Economically, the collapse of the Soviet Union and the end of Soviet support to North Korean industries caused a contraction of North Korea's economy by 25% during the 1990s. While, by some accounts, North Korea had a higher per capita income than South Korea in the 1970s, by 2006 its per capita income was estimated to be only $1,108, one-seventeenth that of South Korea. Experimentation in small-scale entrepreneurship took place from 2009 to 2013, and although legal uncertainties remain, this has developed into a significant sector. By 2016 economic liberalisation had progressed to the extent that both locally-responsible and state industrial enterprises gave the state 20% to 50% of their output, selling the remainder at market-based prices to buy raw materials, in a manner akin to a free market. In 2014 the Enterprise Act was amended to allow state-owned enterprise managers to engage in foreign trade and joint ventures, and to accept investment from non-government domestic sources. Under the new rules the enterprise director became more like a western chief executive officer, and the chief engineer took on an operational role more like a western chief operating officer. As of 2017 it was unclear whether the Taean work system (described above) still operated in practice to give local people's committees much influence. In 2017 Dr. Mitsuhiro Mimura, Senior Research Fellow at Japan's Economic Research Institute for Northeast Asia, who has visited North Korea 45 times, described it as the "poorest advanced economy in the world", in that while having a comparatively low GDP, it had built a sophisticated production environment. He described the recent rise of entrepreneurial groups through "socialist cooperation", in which groups of individuals could start small enterprises as cooperative groups. Managers in state-owned industries or farms were also free to sell or trade production beyond state plan targets, providing incentives to increase production. Managers could also find investment for the expansion of successful operations, in a process he called "socialist competition". A state plan was still the basis for production, but it was more realistic, leaving room for excess production. The state budget is a major government instrument in carrying out the country's economic goals. Expenditures represented about three-quarters of GNP in the mid-1980s, and their allocation reflected the priorities assigned to different economic sectors.
Taxes were abolished in 1974 as "remnants of an antiquated society". This action, however, was not expected to have any significant effect on state revenue because the overwhelming proportion of government funds—an average of 98.1% during 1961–1970—came from turnover (sales) taxes, deductions from profits paid by state enterprises, and various user fees on machinery and equipment, irrigation facilities, television sets, and water. In order to provide a certain degree of local autonomy as well as to lessen the financial burden of the central government, a "local budget system" was introduced in 1973. Under this system, provincial authorities are responsible for the operating costs of institutions and enterprises not under direct central government control, such as schools, hospitals, shops, and local consumer goods production. In return, they are expected to organize as many profitable ventures as possible and to turn over profits to the central government. Around December of every year, the state budget for the following calendar year is drafted, subject to revision around March. Typically, total revenue exceeds expenditure by a small margin, with the surplus carried over to the following year. The largest share of state expenditures goes to the "people's economy", which averaged 67.3% of total expenditures between 1987 and 1990, followed in magnitude by "socio-cultural", "defense", and "administration". Defense spending, as a share of total expenditures, has increased significantly since the 1960s: from 3.7% in 1959 to 19% in 1960 and, after averaging 19.8% between 1961 and 1966, to 30.4% in 1967. After remaining around 30% until 1971, the defense share decreased abruptly to 17% in 1972 and continued to decline throughout the 1980s. Officially, in both 1989 and 1990 the defense share remained at 12%; for 1991 it was 12.3%, with 11.6% planned for 1992. The declining trend was consistent with the government's announced intentions to stimulate economic development and increase social benefits. However, Western experts have estimated that actual military expenditures are higher than budget figures indicate. In the 1999 budget, expenditures for the farming and power sectors were increased by 15% and 11%, respectively, compared with those of 1998. The 2007 budget estimated revenue at 433.2 billion won ($3.072 billion at a rate of 141 won to the dollar). The growth in public revenue was put at 5.9% in 2006, whereas for 2007 the figure was raised to 7.1%. North Korea claims that it is the only state in the world that does not levy taxes; taxes were abolished beginning on April 1, 1974. Since 2003, North Korean authorities have issued government bonds called "People's Life Bonds" and promoted the slogan "Buying bonds is patriotic". North Korea sold bonds internationally in the late 1970s for 680 million Deutsche marks and 455 million Swiss francs. North Korea defaulted on these bonds by 1984, although the bonds remain traded internationally on speculation that the country will eventually perform on the obligations. "The Sydney Morning Herald" reported that Kim's propaganda had shifted its emphasis towards patriotism, the economy, and improving relations with China, South Korea, and the United States. State-run television promoted a song of praise to the national flag by airing videos that included images of the flag being raised in September 2018, during the mass games marking North Korea's 70th anniversary.
In the video, brief images of troops, fighter jets releasing blue, red, and white smoke, scattered pictures of civilians, new high-rise apartments in the capital, fireworks displays, and even students in their school uniforms can all be seen at the same event. The "South China Morning Post", in a 2019 article, stated that an economic and cultural transformation was already under way within North Korea. It started in earnest in February 2018, during the Pyeongchang Winter Olympic Games, when top musicians from North Korea were sent to perform in South Korea. This included a female quintet who performed in black shorts and red tops. Two months later, Supreme Leader Kim Jong-un attended a performance by the South Korean girl group Red Velvet, the first K-pop show ever held in Pyongyang. The North Korean musicians who performed in South Korea were so highly praised for their performance that Kim decided to send them to Beijing for another goodwill tour in January 2019. Part of this change has been the introduction of other cultures, including Western culture, previously dismissed as vulgar and corrupting, which is now slowly reaching the North Korean public. Second-hand Harry Potter books can now be read at the National Library, and Bollywood films like "Three Idiots" have had runs in North Korean cinemas. The changes have also reached the economic sector, with factories producing goods more associated with the West, such as Air Jordan shoes, for domestic consumption. Under amendments made to the constitution in 2019, the former methods of economic management, Ch'ŏngsan-ni in agriculture and Taean in industry, were phased out altogether. North Korea also operates a planned economy in industry: the government provides fuel and materials to a factory, and the factory manufactures the products and quantities the government requires.

North Korea's self-reliant development strategy assigned top priority to developing heavy industry, with parallel development in agriculture and light industry. This policy was achieved mainly by giving heavy industry preferential allocation of state investment funds. More than 50% of state investment went to the industrial sector during the 1954–1976 period (47.6%, 51.3%, 57.0%, and 49.0%, respectively, during the Three-Year Plan, Five-Year Plan, First Seven-Year Plan, and Six-Year Plan). As a result, gross industrial output grew rapidly. As was the case with the growth in national output, the pace of growth has slowed markedly since the 1960s. The rate declined from 41.7% and 36.6% a year during the Three-Year Plan and Five-Year Plan, respectively, to 12.8%, 16.3%, and 12.2%, respectively, during the First Seven-Year Plan, Six-Year Plan, and Second Seven-Year Plan. As a result of faster growth in industry, that sector's share in total national output increased from 16.8% in 1946 to 57.3% in 1970. Since the 1970s, industry's share in national output has remained relatively stable. From all indications, the pace of industrialization during the Third Seven-Year Plan up to 1991 was far below the planned rate of 9.6%. In 1990 it was estimated that the industrial sector's share of national output was 56%. Industry's share of the combined total of gross agricultural and industrial output climbed from 28% in 1946 to well over 90% in 1980.
Heavy industry received more than 80% of the total state investment in industry between 1954 and 1976 (81.1%, 82.6%, 80%, and 83%, respectively, during the Three-Year Plan, Five-Year Plan, First Seven-Year Plan, and Six-Year Plan), and was overwhelmingly favored over light industry. North Korea claims to have fulfilled the Second Seven-Year Plan (1978–1984) target of raising the industrial output in 1984 to 120% of the 1977 target, equivalent to an average annual growth rate of 12.2%. Judging from the production of major commodities that form the greater part of industrial output, however, it is unlikely that this happened. For example, the increase during the 1978–1984 plan period for electric power, coal, steel, metal-cutting machines, tractors, passenger cars, chemical fertilizers, chemical fibers, cement, and textiles, respectively, was 78%, 50%, 85%, 67%, 50%, 20%, 56%, 80%, 78%, and 45%. Raw materials were in short supply and so were energy and hard currency. Infrastructure decayed and machinery became obsolete. Unlike other socialist countries in the Eastern Europe, North Korea kept planning in a highly centralized manner and refused to liberalize economic management. In the mid-1980s, the speculation that North Korea would emulate China in establishing Chinese-style special economic zones was flatly denied by then deputy chairman of the Economic Policy Commission Yun Ki-pok (Yun became chairman as of June 1989). China's special economic zones typically are coastal areas established to promote economic development and the introduction of advanced technology through foreign investment. Investors are offered preferential tax terms and facilities. The zones, which allow greater reliance on market forces, have more decision making power in economic activities than do provincial-level units. Over the years, China has tried to convince the North Korean leadership of the advantages of these zones by giving tours of the various zones and explaining their values to visiting high-level officials. In April 1982, Kim Il-sung announced a new economic policy giving priority to increased agricultural production through land reclamation, development of the country's infrastructure—especially power plants and transportation facilities—and reliance on domestically produced equipment. There also was more emphasis on trade. In September 1984, North Korea promulgated a joint venture law to attract foreign capital and technology. The new emphasis on expanding trade and acquiring technology was not, however, accompanied by a shift in priorities away from support of the military. In 1991, North Korea announced the creation of a Special Economic Zone (SEZ) in the northeast regions of Rason (Rason Special Economic Zone) and Ch'ŏngjin. Investment in this SEZ has been slow in coming. Problems with infrastructure, bureaucracy, uncertainties about the security of investments, and viability have hindered growth and development. Nevertheless, thousands of small Chinese businesses had set up profitable operations in North Korea by 2011. A government research center, the Korea Computer Center, was set up in 1990, starting the slow development of an information technology industry. In 2013 and 2014, the State Economic Development Administration announced a number of smaller special economic zones covering export handling, mineral processing, high technology, gaming and tourism. The most successful export industry is the garment industry. 
Production is by a North Korean firm for a European or other foreign partner, by a Chinese firm operating in North Korea with a North Korean partner, or by North Korean workers working in Chinese or other foreign factories. Wages are the lowest in northeastern Asia.

North Korean motor vehicle production serves military, industrial and construction goals, with private car ownership by citizens remaining in low demand. Building on Soviet origins, a subsequent practice of cloning foreign models, and a recent automobile joint venture, North Korea has developed a wide-ranging automotive industry producing all types of vehicles. Production covers urban and off-road minicars; luxury cars; SUVs; small, medium, heavy, and super-heavy cargo trucks; construction and off-road trucks; minibuses and minivans; coach, civilian and articulated buses; trolleybuses; and trams. However, North Korea produces far fewer vehicles than its production capability allows due to the ongoing economic crisis. North Korea has not joined or collaborated with the OICA, or with any other automotive organization, so any critical information about its motor vehicle industry is limited.

The energy sector is one of the most serious bottlenecks in the North Korean economy. Since 1990, the supply of oil, coal, and electricity declined steadily, seriously affecting all sectors of the economy. Crude oil was formerly imported by pipeline at "friendship prices" from the former Soviet Union or China, but the withdrawal of Russian concessions and the reduction of imports from China brought down annual imports from about in 1988 to less than by 1997. As the imported oil was refined into fuels for transportation and agricultural machinery, a serious cutback in oil imports caused critical problems in transportation and agriculture. According to statistics compiled by the South Korean agency Statistics Korea based on International Energy Agency (IEA) data, per capita electricity consumption fell from its peak of 1,247 kilowatt hours in 1990 to a low of 712 kilowatt hours in 2000. It has since risen slowly, to 819 kilowatt hours in 2008, a level below that of 1970.

North Korea has no coking coal, but has substantial reserves of anthracite in Anju, Aoji (Ŭndŏk), and other areas. Coal production peaked at 43 million tons in 1989 and steadily declined to 18.6 million tons in 1998. Major causes of coal shortages include mine flooding and outdated mining technology. As coal was used mainly for industry and electricity generation, the decrease in coal production caused serious problems in industrial production and electricity generation. Coal production may not increase significantly until North Korea imports modern mining technology. North Korea's electricity generation peaked in 1989 at about 30 TWh. There were seven large hydroelectric plants in the 1980s. Four were along the Yalu River, built with Chinese aid and supplying power to both countries. In 1989, 60% of electricity generation was hydroelectric and 40% fossil fueled, mostly coal-fired. In 1997, coal accounted for more than 80% of primary energy consumption and hydro power for more than 10%. Net imports of coal represented only about 3% of coal consumption. Hydroelectric power plants generated about 65% of North Korea's electricity and coal-fired thermal plants about 35% in 1997. However, with only 20% of the per capita electricity generation of Japan, North Korea suffered from chronic supply shortages.
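As a rough illustration of the scale of the decline described above, the following short Python sketch computes the percentage changes implied by the per capita consumption figures cited in the text; the calculation itself is not from the source.

```python
# Illustrative only: percentage changes implied by the per capita electricity
# consumption figures cited above (kWh per person, Statistics Korea / IEA data).
consumption = {1990: 1247, 2000: 712, 2008: 819}

decline = (consumption[1990] - consumption[2000]) / consumption[1990]
recovery = (consumption[2008] - consumption[2000]) / consumption[2000]

print(f"1990-2000 decline:  {decline:.0%}")   # ~43%
print(f"2000-2008 recovery: {recovery:.0%}")  # ~15%, still below the 1970 level
```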
Coal exports to China currently account for a major portion of North Korea's revenue. Some hydroelectric facilities were believed to be out of operation due to damage from major flooding in 1995. Coal-fired power plants were running well under capacity, due in part to a serious decline in coal supply and in part to problems with transportation of coal. The electricity supply steadily declined and was 17 TWh in 1998. Since electricity generation needed to double just to return to the 1989 level, power shortages were expected to continue until coal production increased substantially and generating equipment was refurbished. Transmission losses were reported to be around 30%.

Construction has been an active sector in North Korea. This has been demonstrated not only through large housing programmes, most visibly the high-rise apartment blocks in Pyongyang, but also in the smaller modern apartment complexes widespread even in the countryside. These are dwarfed in every sense by "grand monumental edifices". The same may apply even to apparently economically useful projects such as the Nampo Dam, which cost US$4bn. The years of economic contraction in the 1990s slowed this sector as it did others; the shell of the 105-story Ryugyŏng Hotel towered unfinished on Pyongyang's skyline for over a decade. The Bank of Korea claims that construction's share of GDP fell by almost one-third between 1992 and 1994, from 9.1% to 6.3%. This accords with a rare official figure of 6% for 1993, when the sector was said to have employed 4.2% of the labour force. However, the latter figure excludes the Korean People's Army, which visibly does much of the country's construction work. Since about 2012, when 18 tower blocks were built in Pyongyang, a construction boom has taken place in the capital. Major projects include the Mansudae People's Theatre (2012), Munsu Water Park (2013), the modernisation of Pyongyang Sunan International Airport (2015) and the Science and Technology Center (2015).

The Central Bank of North Korea, under the Ministry of Finance, has a network of 227 local branches. Several reissues of banknotes in recent years suggest that citizens are inclined to hoard rather than bank any savings that they make from their incomes; reportedly they now also prefer foreign currency. At least two foreign aid agencies have recently set up microcredit schemes, lending to farmers and small businesses. In late 2009, North Korea revalued its currency, effectively confiscating all privately held money above the equivalent of US$35 per person. The revaluation effectively wiped out the savings of many North Koreans. Days after the revaluation, the won dropped 96% against the United States dollar. Pak Nam-gi, the director of the Planning and Finance Department of North Korea's ruling Workers' Party, was blamed for the disaster and later executed in 2010. In 2004 and 2006 laws were passed to codify rules for savings and commercial banking. However, it was not until 2012 that North Korean banks started to seriously compete for retail customers. Competing electronic cash cards have become widely accepted in Pyongyang and other cities, but are generally not linked to bank accounts. North Korean banks have introduced retail products which permit a mobile phone app to make payments and top-ups.
As of May 2013, the Chinese banks China Merchants Bank, Industrial and Commercial Bank of China, China Construction Bank, and Agricultural Bank of China had stopped "all cross-border cash transfers, regardless of the nature of the business" with North Korea. The Bank of China, China's primary institution for foreign exchange transactions, said on May 14, 2013, that "it had closed the account of Foreign Trade Bank, North Korea's main foreign exchange bank". However, smaller banks based in northeastern China, across the border from North Korea, were reported to still be handling large-scale cross-border transfers. For example, the Bank of Dalian branch in Dandong was still processing transfers to North Korea.

Until the early 2000s the official retail sector was mainly state-controlled, under the direction of the People's Services Committee. Consumer goods were few and of poor quality, with most provided on a ration basis. There were state-run stores and direct factory outlets for the masses, and special shops with luxuries for the elite—as well as a chain of hard-currency stores (a joint venture with the association of pro-Pyongyang Korean residents in Japan, the Ch'ongryŏn), with branches in large cities. In 2002 and in 2010, private markets were progressively legalized, mostly for food sales. As of 2013, urban and farmer markets were held every 10 days, and most urban residents lived within 2 km of a market. In 2012, the third large shopping mall in Pyongyang, the Kwangbok Area Shopping Center, opened. In 2014 the construction of another large shopping mall started. As of 2017, these malls sold competing brands of goods; for example, at least ten different kinds of toothpaste were on sale. In 2017, the Korea Institute for National Unification estimated there were 440 government-approved markets employing about 1.1 million people.

North Korea's sparse agricultural resources limit agricultural production. Climate, terrain, and soil conditions are not particularly favorable for farming, with a relatively short cropping season. Only about 17% of the total landmass is arable, and only part of that is well suited for cereal cultivation; the major portion of the country is rugged mountain terrain. The weather varies markedly according to elevation, and lack of precipitation, along with infertile soil, makes land at elevations higher than 400 meters unsuitable for purposes other than grazing. Precipitation is geographically and seasonally irregular, and in most parts of the country as much as half the annual rainfall occurs in the three summer months. This pattern favors the cultivation of paddy rice in warmer regions that are outfitted with irrigation and flood control networks. Rice yields are 5.3 tonnes per hectare, close to international norms. In 2005, North Korea was ranked by the FAO as an estimated 10th in the production of fresh fruit and an estimated 19th in the production of apples. Farming is concentrated in the flatlands of the four west coast provinces, where a longer growing season, level land, adequate rainfall, and good irrigated soil permit the most intensive cultivation of crops. A narrow strip of similarly fertile land runs through the eastern seaboard Hamgyŏng provinces and Kangwŏn Province, but the interior provinces of Chagang and Ryanggang are too mountainous, cold, and dry to allow much farming.
The mountains contain the bulk of North Korea's forest reserves, while the foothills within and between the major agricultural regions provide lands for livestock grazing and fruit tree cultivation. Since self-sufficiency remains an important pillar of North Korean ideology, self-sufficiency in food production is deemed a worthy goal. Another aim of government policies—to reduce the gap between urban and rural living standards—requires continued investment in the agricultural sector. The stability of the country depends on steady, if not rapid, increases in the availability of food items at reasonable prices. In the early 1990s, there were severe food shortages. The most far-reaching statement on agricultural policy is embodied in Kim Il-sung's 1964 "Theses on the Socialist Agrarian Question in Our Country", which underscores the government's concern for agricultural development. Kim emphasized technological and educational progress in the countryside as well as collective forms of ownership and management. As industrialization progressed, the share of agriculture, forestry, and fisheries in total national output declined from 63.5% in 1945 and 31.4% in 1946 to a low of 26.8% in 1990. Their share in the labor force also declined, from 57.6% in 1960 to 34.4% in 1989. In the 1990s, the decreasing ability to carry out mechanized operations (including the pumping of water for irrigation), as well as lack of chemical inputs, was clearly contributing to reduced yields and increased harvesting and post-harvest losses. Incremental improvements in agricultural production have been made since the late 1990s, bringing North Korea close to self-sufficiency in staple foods by 2013. In particular, rice yields have steadily improved, though yields on other crops have generally not. The production of protein foods remains inadequate. Access to chemical fertilizer has declined, but the use of compost and other organic fertilizer has been encouraged. North Korean fisheries illicitly export seafood, primarily crab, to Dandong, Liaoning. Crabs, clams and conches from the Yellow Sea waters of North Korea are popular in China, possibly because the less salty water improves taste.

Since the 1950s, a majority of North Koreans have received their food through the public distribution system (PDS). The PDS requires farmers in agricultural regions to hand over a portion of their production to the government and then reallocates the surplus to urban regions, which cannot grow their own food. About 70% of the North Korean population, including the entire urban population, receives food through this government-run system. Before the floods, recipients were generally allotted 600–700 grams per day, while high officials, military men, heavy laborers, and public security personnel were allotted slightly larger portions of 700–800 grams per day. As of 2013, the target average distribution was 573 grams of cereal equivalent per person per day, but varied according to age, occupation, and whether rations were received elsewhere (such as school meals). However, as of 2019, this figure had been reduced to 312 grams per day, according to an investigation conducted by the United Nations between March 29 and April 12. Decreases in production affected the quantity of food available through the public distribution system. Shortages were compounded when the North Korean government imposed further restrictions on collective farmers.
When farmers, who had never been covered by the PDS, were mandated by the government to reduce their own food allotments from 167 kilograms to 107 kilograms of grain per person each year, they responded by withholding portions of the required amount of grain. Famine refugees reported that the government decreased PDS rations to 150 grams in 1994 and to as low as 30 grams by 1997. It was further reported that the PDS failed to provide any food from April to August 1998 (the "lean" season) as well as from March to June 1999. In January 1998, the North Korean government publicly announced that the PDS would no longer distribute rations and that families needed to somehow procure their own food supplies. By 2005, the PDS was only supplying households with approximately one half of an absolute minimum caloric need. By 2008, the system had significantly recovered, and, from 2009 to 2013, daily per person rations averaged 400 grams per day for much of the year, though in 2011 they dropped to 200 grams per day from May to September. It is estimated that in the early 2000s, the average North Korean family drew some 80% of its income from small businesses that were technically illegal (though unenforced) in North Korea. In 2002 and in 2010, private markets were progressively legalized. As of 2013, urban and farmer markets were held every 10 days, and most urban residents lived within 2 km of a market, with markets having an increasing role in obtaining food. From 1994 to 1998, North Korea suffered a famine. Since North Korea is a closed country, the precise death toll is difficult to determine. Estimates of deaths from starvation or malnutrition range from 240,000 to 480,000. Since 1998 there has been a gradual recovery in agricultural production, which by 2013 brought North Korea back close to self-sufficiency in staple foods. However, as of 2013, most households have borderline or poor food consumption, and consumption of protein remains inadequate.

In the 1990s, the North Korean economy saw stagnation turn into crisis. Economic assistance from the Soviet Union and China had been an important factor in its economic growth. With its collapse in 1991, the Soviet Union withdrew its support and demanded payment in hard currency for imports. China stepped in to provide some assistance and supplied food and oil, most of it reportedly at concessionary prices. The North Korean economy was undermined and its industrial output began to decline in 1990. Deprived of industrial inputs, including fertilizers, pesticides, and electricity for irrigation, agricultural output also started to decrease even before North Korea had a series of natural disasters in the mid-1990s. This evolution, combined with a series of natural disasters including record floods in 1995, caused one of the worst economic crises in North Korea's history. Other causes of this crisis were high defense spending (about 25% of GDP) and bad governance. In December 1991, North Korea established a "zone of free economy and trade" to include the northeastern port cities of Unggi (Sŏnbong), Ch'ŏngjin, and Najin. The establishment of this zone also had ramifications for the questions of how far North Korea would go in opening its economy to the West and to South Korea, the future of the development scheme for the Tumen River area, and, more importantly, how much North Korea would reform its economic system.
North Korea announced in December 1993 a three-year transitional economic policy placing primary emphasis on agriculture, light industry, and foreign trade. However, lack of fertilizer, natural disasters, and poor storage and transportation practices have left the country more than a million tons per year short of grain self-sufficiency. Moreover, lack of foreign exchange to purchase spare parts and oil for electricity generation left many factories idle. The 1990s famine paralyzed many of the Stalinist economic institutions. The government pursued Kim Jong-il's "Songun" policy, under which the military is deployed to direct production and infrastructure projects. As a consequence of the government's policy of establishing economic self-sufficiency, the North Korean economy has become increasingly isolated from that of the rest of the world, and its industrial development and structure do not reflect its international competitiveness. Domestic firms are shielded from international as well as domestic competition; the result is chronic inefficiency, poor quality, limited product diversity, and underutilization of plants. This protectionism also limits the size of the market for North Korean producers, which prevents them from taking advantage of economies of scale.

The food shortage was primarily precipitated by the loss of fuel and other raw material imports from China and the Soviet Union, which had been essential to support an energy-intensive and energy-inefficient farming system. Following the collapse of the Soviet Union, the former concessional trade relationships which had benefited North Korea were no longer available. The three flood and drought years between 1994 and 1996 only served to complete the collapse of the agriculture sector. In 2004, more than half (57%) of the population did not have enough food to stay healthy; 37% of children had stunted growth, and mothers severely lacked nutrition. In 2006, the World Food Program (WFP) and FAO estimated a requirement of 5.3 to 6.5 million tons of grain when domestic production fulfilled only 3.825 million tons. The country also faces land degradation after forests were stripped for agriculture, resulting in soil erosion. In 2008, a decade after the worst years of the famine, total production was 3.34 million tons (grain equivalent) compared with a need of 5.98 million tons. Thirty-seven percent of the population was deemed to be insecure in food access. Weather continued to pose challenges every year, but overall food production grew gradually, and by 2013, production had increased to the highest level since the crisis, 5.03 million tons of cereal equivalent, against a minimum requirement of 5.37 million tons. In 2014 North Korea had an exceptionally good harvest, 5.08 million tons of cereal equivalent, almost sufficient to feed the entire population. While food production had recovered significantly since the hardest years of 1996 and 1997, the recovery was fragile, subject to adverse weather and year-to-year economic shortages. Distribution was uneven, with the Public Distribution System largely ineffective. Any shortfall between production and need could easily be met by government-funded imports, should the decision to make those purchases be made. In most years North Korea now has lower malnutrition levels than some richer Asian countries.

According to a 2012 report by the South Korea-based North Korea Resource Institute (NKRI), North Korea has substantial reserves of iron ore, coal, limestone, and magnesite.
In addition, North Korea is thought to have tremendous potential rare metal resources, which have been valued in excess of US$6 trillion. It is the world's 18th largest producer of iron and zinc, and has the 22nd largest coal reserves in the world. It is also the 15th largest fluorite producer and 12th largest producer of copper and salt in Asia. Other major natural resources in production include lead, tungsten, graphite, magnesite, gold, pyrites, fluorspar, and hydropower. In 2015, North Korea exported 19.7 million tonnes of coal, worth $1.06 billion, much of it to China. In 2016 it was estimated that coal shipments to China accounted for about 40% of exports. However, starting in February 2017, China suspended all North Korean coal imports, although according to China overall trade with North Korea increased. North Korea has a proficient information technology industry. In 2018, a technological exhibition unveiled a new wi-fi service called Mirae ("Future"), which allowed mobile devices to access the intranet network in Pyongyang. The exhibition also showcased a home automation system using speech recognition in Korean. North Korea's cartoon animation studios, such as SEK Studio, sub-contract work from South Korean animation studios. Mansudae Overseas Projects builds monuments around the world.

North Korea's economy has been unique in its elimination of markets. By the 1960s, market elements had been suppressed almost completely. Almost all items, from food to clothes, have traditionally been handed out through a public distribution system, with money having only a symbolic meaning. Rations of food depend on one's position in the hierarchy of the system, and positions appear to be semi-hereditary. Until the late 1980s, peasants were not allowed to cultivate private garden plots. Since the government is the dominant force in the development and management of the economy, bureaus and departments have proliferated at all administrative levels. There are fifteen committees—such as the agricultural and state planning committees—one bureau, and twenty departments under the supervision of the Cabinet; of these, twelve committees, one bureau, and sixteen departments are involved in economic management. In the early 1990s, several vice premiers of the then State Administration Council supervised economic affairs. Organizations undergo frequent reorganization. Many of these agencies have their own separate branches at lower levels of government, while others maintain control over subordinate sections in provincial and county administrative agencies. Around 1990, with the collapse of the Soviet Union, restrictions on private sales, including grain, ceased to be enforced. It is estimated that in the early 2000s, the average North Korean family drew some 80% of its income from small businesses that were technically illegal (though unenforced) in North Korea. In 2002 and in 2010, private markets were progressively legalized. As of 2013, urban and farmer markets were held every 10 days, and most urban residents lived within 2 km of a market. In 2014, North Korea announced the "May 30th measures". These planned to give more freedom to farmers, allowing them to keep 60% of their produce. Enterprise managers would also be allowed to hire and fire workers, and to decide whom they do business with and where they buy raw materials and spare parts. Some reports suggest that these measures would allow nominally state-run enterprises to be run on capitalist lines like those in China.
North Korea, one of the world's most centrally planned and isolated economies, faces desperate economic conditions. Industrial capital stock is nearly beyond repair as a result of years of underinvestment and shortages of spare parts. Industrial and power output have declined in parallel. During what North Korea called the "peaceful construction" period before the Korean War, the fundamental task of the economy was to overtake the level of output and efficiency attained toward the end of the Japanese occupation; to restructure and develop a viable economy reoriented toward the communist-bloc countries; and to begin the process of socializing the economy. Nationalization of key industrial enterprises and land reform, both of which were carried out in 1946, laid the groundwork for two successive one-year plans in 1947 and 1948, respectively, and the Two-Year Plan of 1949–50. It was during this period that the piece-rate wage system and the independent accounting system began to be applied and that the commercial network increasingly came under state and cooperative ownership. The basic goal of the Three-Year Plan, officially named "The Three-Year Post-war Reconstruction Plan of 1954–56", was to reconstruct an economy torn by the Korean War. The plan stressed more than merely regaining the prewar output levels. The Soviet Union, other East European countries and China provided reconstruction assistance. The highest priority was developing heavy industry, but an earnest effort to collectivize farming also was begun. At the end of 1957, output of most industrial commodities had returned to 1949 levels, except for a few items such as chemical fertilizers, carbides, and sulfuric acid, whose recovery took longer. Having basically completed the task of reconstruction, the state planned to lay a solid foundation for industrialization while completing the socialization process and solving the basic problems of food and shelter during the Five-Year Plan of 1957–1960. The socialization process was completed by 1958 in all sectors of the economy, and the Ch'ŏllima Movement was introduced. Although growth rates reportedly were high, there were serious imbalances among the different economic sectors. Because rewards were given to individuals and enterprises that met production quotas, frantic efforts to fulfill plan targets in competition with other enterprises and industries caused disproportionate growth among various enterprises, between industry and agriculture and between light and heavy industries. Because resources were limited and the transportation system suffered bottlenecks, resources were diverted to politically well-connected enterprises or those whose managers complained the loudest. An enterprise or industry that performed better than others often did so at the expense of others. Such disruptions intensified as the target year of the plan approached. Until the 1960s, North Korea's economy grew much faster than South Korea's. Although North Korea was behind in total national output, it was ahead of South Korea in per capita national output, because of its smaller population relative to South Korea. For example, in 1960 North Korea's population was slightly over 10 million people, while South Korea's population was almost 25 million people. Annual economic growth rates of 30% and 21% during the Three-Year Plan of 1954–1956 and the Five-Year Plan of 1957–1960, respectively, were reported. 
After claiming early fulfillment of the Five-Year Plan in 1959, North Korea officially designated 1960 a "buffer year"—a year of adjustment to restore balances among sectors before the next plan became effective in 1961. Not surprisingly the same phenomenon recurred in subsequent plans. Because the Five-Year Plan was fulfilled early, it became a de facto four-year plan. Beginning in the early 1960s, however, North Korea's economic growth slowed until it was stagnant at the beginning of the 1990s. Various factors explain the very high rate of economic development of the country in the 1950s and the general slowdown since the 1960s. During the reconstruction period after the Korean War, there were opportunities for extensive economic growth—attainable through the communist regime's ability to marshall idle resources and labor and to impose a low rate of consumption. This general pattern of initially high growth resulting in a high rate of capital formation was mirrored in other Soviet-type economies. Toward the end of the 1950s, as reconstruction work was completed and idle capacity began to diminish, the economy had to shift from the extensive to the intensive stage, where the simple communist discipline of marshaling underutilized resources became less effective. In the new stage, inefficiency arising from emerging bottlenecks led to diminishing returns. Further growth would only be attained by increasing efficiency and technological progress. Beginning in the early 1960s, a series of serious bottlenecks began to impede development. Bottlenecks were pervasive and generally were created by the lack of arable land, skilled labor, energy, and transportation, and deficiencies in the extractive industries. Moreover, both land and marine transportation lacked modern equipment and modes of transportation. The inability of the energy and extractive industries as well as of the transportation network to supply power and raw materials as rapidly as the manufacturing plants could absorb them began to slow industrial growth. The First Seven-Year Plan (initially 1961–1967) built on the groundwork of the earlier plans but changed the focus of industrialization. Heavy industry, with the machine tool industry as its linchpin, was given continuing priority. During the plan, however, the economy experienced widespread slowdowns and reverses for the first time, in sharp contrast to the rapid and uninterrupted growth during previous plans. Disappointing performance forced the planners to extend the plan three more years, until 1970. During the last part of the "de facto" ten-year plan, emphasis shifted to pursuing parallel development of the economy and of defense capabilities. This shift was prompted by concern over the military takeover in South Korea by General Park Chung-hee (1961–1979), escalation of the United States involvement in Vietnam, and the widening Sino-Soviet split. It was thought that stimulating a technological revolution in the munitions industry was one means to achieve these parallel goals. In the end, the necessity to divert resources to defense became the official explanation for the plan's failure. The Six-Year Plan of 1971–1976 followed immediately in 1971. In the aftermath of the poor performance of the preceding plan, growth targets of the Six-Year Plan were scaled down substantially. Because some of the proposed targets in the First Seven-Year Plan had not been attained even by 1970, the Six-Year Plan did not deviate much from its predecessor in basic goals. 
The Six-Year Plan placed more emphasis on technological advance, self-sufficiency ("Juche") in industrial raw materials, improving product quality, correcting imbalances among different sectors, and developing the power and extractive industries; the last of these had been deemed largely responsible for slowdowns during the First Seven-Year Plan. The plan called for attaining a self- sufficiency rate of 60–70% in all industrial sectors by substituting domestic raw materials wherever possible and by organizing and renovating technical processes to make the substitution feasible. Improving transport capacity was seen as one of the urgent tasks in accelerating economic development—it was one of the major bottlenecks of the Six-Year Plan. North Korea claimed to have fulfilled the Six-Year Plan by the end of August 1975, a full year and four months ahead of schedule. Under the circumstances, it was expected that the next plan would start without delay in 1976, a year early, as was the case when the First Seven-Year Plan was instituted in 1961. Even if the Six-Year Plan had been completed on schedule, the next plan should have started in 1977. However, it was not until nearly two years and four months later that the long-awaited plan was unveiled—1977 had become a "buffer year". The inability of the planners to continuously formulate and institute economic plans reveals as much about the inefficacy of planning itself as the extent of the economic difficulties and administrative disruptions facing the country. For example, targets for successive plans have to be based on the accomplishments of preceding plans. If these targets are underfulfilled, all targets of the next plan—initially based on satisfaction of the plan—have to be reformulated and adjusted. Aside from underfulfillment of the targets, widespread disruptions and imbalances among various sectors of the economy further complicate plan formulation. The basic thrust of the Second Seven-Year Plan (1978–1984) was to achieve the three-pronged goals of self-reliance, modernization, and "scientification". Although the emphasis on self-reliance was not new, it had not previously been the explicit focus of an economic plan. This new emphasis might have been a reaction to mounting foreign debt originating from large-scale imports of Western machinery and equipment in the mid-1970s. Through modernization North Korea hoped to increase mechanization and automation in all sectors of the economy. "Scientification" means the adoption of up-to-date production and management techniques. The specific objectives of the economic plan were to strengthen the fuel, energy, and resource bases of industry through priority development of the energy and extractive industries; to modernize industry; to substitute domestic resources for certain imported raw materials; to expand freight-carrying capacity in railroad, road, and marine transportation systems; to centralize and containerize the transportation system; and to accelerate a technical revolution in agriculture. In order to meet the manpower and technology requirements of an expanding economy, the education sector also was targeted for improvements. The quality of the comprehensive eleven-year compulsory education system was to be enhanced to train more technicians and specialists, and to expand the training of specialists, particularly in the fields of fuel, mechanical, electronic, and automation engineering. 
Successful fulfillment of the so-called nature-remaking projects also was part of the Second Seven-Year Plan. These projects referred to the five-point program for nature transformation unveiled by Kim Il-sung in 1976: completing the irrigation of non-paddy fields; reclaiming 1,000 square kilometres of new land; building 1,500 to 2,000 km of terraced fields; carrying out afforestation and water conservation work; and reclaiming tidal land. From all indications, the Second Seven-Year Plan was not successful. North Korea generally downplayed the accomplishments of the plan, and no other plan received less official fanfare. It was officially claimed that the economy had grown at an annual rate of 8.8% during the plan, somewhat below the planned rate of 9.6%. The reliability of this aggregate measure, however, is questionable. During the plan, the target annual output of 10 million tons of grains (cereals and pulses) was attained. However, by official admission, the targets of only five other commodities were fulfilled. Judging from the growth rates announced for some twelve industrial products, it is highly unlikely that total industrial output increased at an average rate of 12.2% as claimed. After the plan concluded, there was no new economic plan for two years, an indication of both the plan's failure and the severity of the economic and planning problems confronting the economy in the mid-1980s. From 1998 to 2003, the government implemented a plan for scientific and technical development, which focused on the nation's IT and electronic industry.

Growth and changes in the structure and ownership pattern of the economy have also changed the labor force. By 1958 individual private farmers, who once constituted more than 70% of the labor force, had been transformed into or replaced by state or collective farmers. Private artisans, merchants, and entrepreneurs had joined state or cooperative enterprises. In the industrial sector in 1963, the last year for which such data are available, there were 2,295 state enterprises and 642 cooperative enterprises. The size and importance of the state enterprises can be surmised from the fact that state enterprises, which constituted 78% of the total number of industrial enterprises, contributed 91% of total industrial output. The labor force numbers about 12.6 million.

Statistics on North Korea's trade partners are collected by international organizations like the United Nations and the International Monetary Fund, and by the South Korean Ministry of Unification. It has also been estimated that imports of arms from the Soviet Union in the period 1988 to 1990 accounted for around 30% of North Korea's total imports, and that between 1981 and 1989 North Korea earned approximately $4 billion from the export of arms, approximately 30% of North Korea's total exports in that period. The nominal dollar value of arms exports from North Korea in 1996 was estimated to have been around $50 million. North Korea's foreign trade deteriorated in the 1990s. After hitting a bottom of $1.4 billion in 1998, it recovered slightly. North Korea's trade total in 2002 was $2.7 billion: only about 50% of the $5.2 billion recorded in 1988, even in nominal US dollars. These figures exclude intra-Korean trade, deemed internal, which rose in 2002 to $641 million. During the late 2000s trade grew strongly, almost tripling between 2007 and 2011 to $5.6 billion, with much of the growth being with China.
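As a back-of-the-envelope illustration of what the arms-export estimates above imply, the short Python sketch below derives the total and average annual exports from the cited figures ($4 billion in arms exports over 1981–1989, said to be roughly 30% of total exports); this calculation is not part of the source.

```python
# Illustrative only: figures implied by the arms-export estimates cited above.
arms_exports = 4.0e9        # USD, cumulative 1981-1989
arms_share = 0.30           # estimated share of total exports
years = 1989 - 1981 + 1     # nine calendar years

total_exports = arms_exports / arms_share
print(f"Implied total exports 1981-1989: ~${total_exports / 1e9:.1f} billion")
print(f"Implied average per year:        ~${total_exports / years / 1e9:.2f} billion")
# Roughly $13.3 billion in total, or about $1.5 billion per year.
```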
By about 2010 external trade had returned to 1990 levels, and by 2014 it was nearly double 1990 levels, with trade with China increasing from 50% of total trade in 2005 to nearly 90% in 2014. In 2015, it was estimated that exports to China were $2.3 billion—83% of total exports of $2.83 billion. In addition to Kaesŏng and Kŭmgang-san, other special economic areas were established at Sinŭiju in the northwest (on the border with China) and at Rasŏn in the northeast (on the border with China and Russia). International sanctions, many of them related to North Korea's development of weapons of mass destruction, impeded international trade to some degree. United States President Barack Obama approved an executive order in April 2011 that declared "the importation into the United States, directly or indirectly, of any goods, services, or technology from North Korea is prohibited". Operational sanctions included United Nations Security Council Resolutions 1695, 1718, 1874, 1928, 2087, and 2094. Reports in 2018 indicated that trade sanctions (bans on almost all exports and the freezing of overseas accounts) were seriously affecting the economy. The main paper, "Rodong Sinmun", was running short of paper and publishing only a third of its normal print run; two energy plants supplying electricity to Pyongyang had to be shut down intermittently due to lack of coal, causing blackouts; coal mines were operating under capacity due to lack of fuel; coal could not be transported for the same reason; and food rations had been cut by half. The Taep'oong International Investment Group of Korea is the official company that manages overseas investment in North Korea.

North and South Korea's economic ties have fluctuated greatly over the past 30 years or so. In the late 1990s and most of the 2000s, North-South relations warmed under the Sunshine Policy of President Kim Dae-jung. Many firms agreed to invest in North Korea, encouraged by the South Korean government's commitment to cover their losses should investment projects in the North fail to become profitable. Following a 1988 decision by the South Korean Government to allow trade with the North (see Reunification efforts since 1971), South Korean firms began to import North Korean goods. Direct trade with the South began in the fall of 1990 after the unprecedented September 1990 meeting of the two Korean Prime Ministers. Trade between the countries increased from $18.8 million in 1989 to $333.4 million in 1999, much of it processing or assembly work undertaken in the North. During this decade, the chairman of the South Korean company Daewoo visited North Korea and reached agreement on building a light industrial complex at Namp'o. In other negotiations, Hyundai Asan obtained permission to bring tour groups by sea to Kŭmgang-san on North Korea's southeast coast (see Kŭmgang-san Tourist Region), and more recently to construct the Kaesŏng Industrial Park, near the Korean Demilitarized Zone (DMZ), at a cost of more than $1 billion. In response to the summit between Kim Jong-il and Kim Dae-jung in 2000, North and South Korea agreed in August 2000 to reconnect the section of the Seoul–Pyongyang Gyeongui Railway Line across the DMZ. In addition, the two governments said they would build a four-lane highway bypassing the truce village at Panmunjeom. TV commercials for Samsung's Anycall cell phone featuring North Korean dancer Cho Myong-ae and South Korea's Lee Hyo-ri were first broadcast on June 11, 2006.
Trade with South Korea declined after Lee Myung-bak, who reduced trade to put pressure on North Korea over nuclear matters, was elected President of South Korea in 2008. Trade with South Korea fell from $1.8 billion to $1.1 billion between 2007 and 2013, with most of the remaining trade passing through the Kaesŏng Industrial Park. The Park has been subject to frequent shutdowns due to political tensions. Since the collapse of the Soviet Union, China has been North Korea's primary trading partner. Bilateral trade rose sharply after 2007. In 2007 trade between the two countries was $1.97 billion (₩1.7 trillion). By 2011 trade had increased to $5.6 billion (₩5.04 trillion). Trade with China represented 57% of North Korea's imports and 42% of exports. Chinese statistics for 2013 indicate that North Korean exports to China were nearly $3 billion, with imports of about $3.6 billion. Exports to China in 2015 were estimated at $2.3 billion. Some South Korean companies launched joint ventures in areas like animation and computer software, and Chinese traders have done a booming business back and forth across the China–North Korea border. In a 2007 survey of 250 Chinese operations in North Korea, a majority reported paying bribes. Robert Suter, who headed the Seoul office of the Swedish-Swiss power generation company ABB, said ABB was staking out a position in North Korea: "It is the same as it was in China years ago. You had to be there and you had to build trust." A number of South Korean enterprises were mainly active in a specially developed industrial zone in the Kaesong Industrial Region, and Chinese enterprises were known to be involved in a variety of activities in trade and manufacturing in North Korea. In 2005, European enterprises founded the European Business Association (EBA) in Pyongyang, a "de facto" chamber of commerce representing a number of European-invested joint ventures and other businesses. Ch'ongryŏn, the pro-North Korean General Association of Korean Residents in Japan, broadcast a three-part TV film on its channel in 2008 featuring foreign investment and business in North Korea. This film was put on a YouTube channel called "BusinessNK" and could be watched together with a number of other videos on foreign joint ventures as well as other investment and business activities in North Korea. Though no international banks operated in the isolated socialist state in 2013, foreign companies were said to be increasingly interested in dealing with North Korea. A flat-screen LCD television factory in North Korea was funded by the Ch'ongryŏn in 2010.

The Rason Special Economic Zone was established in the early 1990s, in the northeastern corner of the country bordering China and Russia. In June 2011, an agreement was made with China to establish a joint free trade area on North Korea's Hwanggumpyong and Wihwa Islands and China's border area near Dandong. North Korea designated over a dozen new special economic zones in 2013 and 2014.
Telecommunications in North Korea Telecommunications in North Korea refers to the communication services available in North Korea. North Korea has not fully adopted mainstream Internet technology due to its isolationist policies. North Korea has an adequate telephone system, with 1.18 million fixed lines available in 2008. However, most phones are only installed for senior government officials. Someone wanting a phone installed must fill out a form indicating their rank, why they want a phone, and how they will pay for it. Most of these are installed in government offices, collective farms, and state-owned enterprises (SOEs), with only perhaps 10 percent controlled by individuals or households. By 1970 automatic switching facilities were in use in Pyongyang, Sinŭiju, Hamhŭng, and Hyesan. A few public telephone booths were beginning to appear in Pyongyang around 1990. In the mid-1990s, an automated exchange system based on an E-10A system produced by Alcatel joint-venture factories in China was installed in Pyongyang. North Korea announced in 1997 that automated switching had replaced manual switching in Pyongyang and 70 other locales. The North Korean press reported in 2000 that fiber-optic cable had been extended to the port of Nampho and that North Pyong'an Province had been connected with fiber-optic cable.

In November 2002, mobile phones were introduced to North Korea, and by November 2003, 20,000 North Koreans had bought mobile phones. There was a ban on cell phones from 2004 to 2008. In December 2008, a new mobile phone service was launched in Pyongyang, operated by the Egyptian company Orascom, with plans to expand coverage to all parts of the country. The 3G mobile phone service, Koryolink, is a joint venture between Orascom and the state-owned Korea Post and Telecommunications Corporation (KPTC). There has been a large demand for the service since it was launched. In May 2010, more than 120,000 North Koreans owned mobile phones; this number had increased to 301,000 by September 2010, 660,000 by August 2011, and 900,000 by December 2011. Orascom reported 432,000 North Korean subscribers after two years of operation (December 2010), increasing to 809,000 by September 2011, and exceeding one million by February 2012. By April 2013 subscriber numbers neared two million. By 2015 the figure had grown to three million. In 2011, 60% of Pyongyang's citizens between the ages of 20 and 50 had a cellphone. On June 15, 2011, StatCounter.com confirmed that some North Koreans use Apple's iPhones, as well as Nokia's and Samsung's smartphones. As of November 2011, mobile phones could not dial into or out of the country, and there was no Internet connection. A 3G network covered 94 percent of the population, but only 14 percent of the territory. Koryolink has no international roaming agreements. Pre-paid SIM cards can be purchased by visitors to North Korea to make international (but not domestic) calls. Prior to January 2013, foreigners had to surrender their phones at the border crossing or airport before entering the country, but with the availability of local SIM cards this policy is no longer in place. Internet access, however, is only available to resident foreigners and not tourists. North Korean mobile phones use a digital signature system to prevent access to unsanctioned files, and log usage information that can be physically inspected. A survey in 2017 found that 69% of households had a mobile phone.
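The subscriber milestones cited above imply very rapid early growth. The following Python sketch is an illustrative calculation only (dates are approximated, and a month is assumed where the text gives only a year); it is not part of the source.

```python
# Illustrative only: annualized growth rates implied by the Koryolink
# subscriber milestones cited above.
from datetime import date

def years_between(start: date, end: date) -> float:
    return (end - start).days / 365.25

milestones = [
    (date(2010, 12, 1), 432_000),    # two years after launch
    (date(2013, 4, 1), 2_000_000),   # subscriber numbers "neared two million"
    (date(2015, 12, 1), 3_000_000),  # "three million" (month assumed)
]

base_date, base_subs = milestones[0]
for end_date, end_subs in milestones[1:]:
    t = years_between(base_date, end_date)
    cagr = (end_subs / base_subs) ** (1 / t) - 1
    print(f"{base_date.year} -> {end_date.year}: ~{cagr:.0%} per year")
# Roughly 90% annual growth to 2013, slowing to ~45-50% averaged over 2010-2015.
```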
In September 2019, a previously unknown company, the Kwangya Trading Company (광야무역회사), announced the release of a cell phone for North Korean consumer use called the Kimtongmu. Although state-run media report that the phone was developed domestically, it was more likely sourced from a Chinese OEM manufacturer and outfitted with North Korean software.

North Korea has had a varying number of connections to other nations. Currently, international fixed-line connections consist of a network connecting Pyongyang to Beijing and Moscow, and Chongjin to Vladivostok. Communications were opened with South Korea in 2000. In May 2006, the TransTeleCom Company and North Korea's Ministry of Communications signed an agreement for the construction and joint operation of a fiber-optic transmission line along the Khasan–Tumangang railway checkpoint section of the North Korea–Russia border. This was the first direct land link between Russia and North Korea. TTC's partner in the design, construction, and connection of the communication line from the Korean side to the junction was the Korea Communication Company of North Korea's Ministry of Communications. The link was built around STM-1 level digital transmission equipment, with the possibility of further increasing bandwidth. The construction was completed in 2007. Since joining Intersputnik in 1984, North Korea has operated 22 lines of frequency-division multiplexing and 10 lines of single channel per carrier for communication with Eastern Europe, and in late 1989 international direct-dial service via a microwave link to Hong Kong was introduced. A satellite ground station near Pyongyang provides direct international communications using the International Telecommunications Satellite Corporation (Intelsat) Indian Ocean satellite. A satellite communications center was installed in Pyongyang in 1986 with French technical support. An agreement to share in Japan's telecommunications satellites was reached in 1990. North Korea joined the Universal Postal Union in 1974 but has direct postal arrangements with only a select group of countries. Following an agreement with the UNDP, the Pyongyang Fiber Optic Cable Factory was built in April 1992, and the country's first optical fiber cable network, consisting of 480 Pulse Code Modulation (PCM) lines and six automatic exchange stations from Pyongyang to Hamhung (300 kilometers), was installed in September 1995. Moreover, the nationwide land leveling and rezoning campaign initiated by Kim Jong-il in Kangwon province in May 1998 and in North Pyongan province in January 2000 facilitated the construction of provincial and county fiber optic lines, which were laid by tens of thousands of Korean People's Army (KPA) soldier-builders and provincial shock brigade members mobilized for the large-scale public works projects designed to rehabilitate the hundreds of thousands of hectares of arable land devastated by the natural disasters of the late 1990s.

Broadcasting in North Korea is tightly controlled by the state and is used as a propaganda arm of the ruling Korean Workers' Party. The Korean Central Television station is located in Pyongyang, and there are also stations in major cities, including Chŏngjin, Kaesŏng, Hamhŭng, Haeju, and Sinŭiju. There are three channels in Pyongyang but only one channel in other cities. Imported Japanese-made color televisions have a North Korean brand name superimposed, but nineteen-inch black-and-white sets have been produced locally since 1980.
One estimate placed the total number of television sets in use in the early 1990s at about 250,000. A study in 2017 found that 98% of households had a TV set. Visitors are not allowed to bring radios into the country. As part of the government's information blockade policy, North Korean radios and televisions must be modified to receive only government stations. These modified radios and televisions must be registered with a special state department. They are also subject to random inspection, and removal of the official seal is punishable by law. To buy a television or a radio, North Korean citizens are required to obtain special permission from officials at their place of residence or employment. North Korea has two AM radio broadcasting networks, the Voice of Korea and the Korean Central Broadcasting Station, and one FM network. All three networks have stations in major cities that offer local programming. There also is a powerful shortwave transmitter for overseas broadcasts in several languages. The official government station is the Korean Central Broadcasting Station (KCBS), which broadcasts in Korean. In 1997 there were 3.36 million radio sets. Kwangmyong is a North Korean "walled garden" national intranet opened in 2000. It is accessible from North Korea's major cities and counties, as well as from universities and major industrial and commercial organizations, and offers 24-hour unlimited access by dial-up telephone line. A survey in 2017 found that 19% of households had a computer, but that only 1% nationally and 5% in Pyongyang had access to the intranet. In August 2016, it was reported that North Korea had launched a state-approved video streaming service which has been likened to Netflix. The service, known as "Manbang" (meaning everyone), uses a set-top box to stream live TV, on-demand video and newspaper articles (from the state newspaper Rodong Sinmun) over the internet. The service is only available to citizens in Pyongyang, Sinuiju and Sariwon. The state TV channel Korean Central Television (KCTV) described the service as a "respite from radio interference". In 2018, North Korea unveiled a new Wi-Fi service called Mirae ("Future"), which allowed mobile devices to access the intranet network in Pyongyang. North Korea's main connection to the international Internet is through a fiber-optic cable connecting Pyongyang with Dandong, China, crossing the China–North Korea border at Sinuiju. Internet access is provided by China Unicom. Before the fiber connection, international Internet access was limited to government-approved dial-up over land lines to China. In 2003 a joint venture between businessman Jan Holterman in Berlin and the North Korean government called KCC Europe brought the commercial Internet to North Korea. The connection was established through an Intelsat satellite link from North Korea to servers located in Germany. This link ended the need to dial ISPs in China. In 2007 North Korea successfully applied at ICANN for the .kp country code top-level domain (ccTLD). KCC Europe administered the domain from Berlin and also hosted a large number of websites. In 2009 Internet service provider Star Joint Venture Co., a joint venture between the North Korean government's Post and Telecommunications Corporation and Thailand-based Loxley Pacific, took control of North Korea's Internet and address allocation. The satellite link was phased out in favour of the fiber connection and is currently only used as a backup line.
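A simple way to see the practical effect of the .kp ccTLD described above is to check whether a name under it resolves from the public Internet. The sketch below is purely illustrative and uses a placeholder hostname, not a real North Korean site; it only assumes ordinary DNS resolution via the operating system.

```python
# Minimal sketch: checking whether a hostname under the .kp ccTLD resolves
# from the public Internet. The hostname below is a placeholder, not a real site.
import socket


def resolve(hostname):
    """Return the IPv4 address for hostname, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None


if __name__ == "__main__":
    host = "example.com.kp"  # hypothetical name under the .kp country-code TLD
    addr = resolve(host)
    print(f"{host} -> {addr if addr else 'did not resolve'}")
```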
In October 2017, a large-scale DDoS attack on the main China connection led to a second Internet connection being brought into service. This connects North Korea with Vladivostok through a fiber-optic cable crossing the Russia–North Korea border at Tumangang. Internet access on this route is provided by TransTelekom, a subsidiary of the Russian national railway operator Russian Railways. North Korea's first Internet café opened in 2002 as a joint venture with the South Korean Internet company Hoonnet. It is connected via a land line to China. Foreign visitors can link their computers to the Internet through international phone lines available in a few hotels in Pyongyang. In 2005 a new Internet café opened in Pyongyang, connected not through China, but through the North Korean satellite link. Content is most likely filtered by North Korean government agencies. Since February 2013, foreigners have been able to access the internet using the 3G phone network. "A Quiet Opening: North Koreans in a Changing Media Environment", a study commissioned by the U.S. State Department, conducted by InterMedia and released on May 10, 2012, shows that despite extremely strict regulations and draconian penalties, North Koreans, particularly elite elements, have increasing access to news and other media outside the state-controlled outlets authorized by the government. While access to the Internet is tightly controlled, radio and DVDs are the media most commonly accessed, and, in border areas, television. As of 2011, USB flash drives were selling well in North Korea, primarily used for watching South Korean dramas and films on personal computers.
Transport in North Korea Transport in North Korea is constrained by economic problems and government restrictions. Public transport predominates, and most of it is electrified. Travel to North Korea is tightly controlled. The standard route to and from North Korea is by plane or train via Beijing. Transport directly to and from South Korea was possible on a limited scale from 2003 until 2008, when a road was opened (bus tours, no private cars). Freedom of movement in North Korea is also limited, as citizens are not allowed to move around freely inside their country. On October 14, 2018, North and South Korea agreed to restore inter-Korean rail and road transportation. On November 22, 2018, North and South Korea reopened a road on the Korean border which had been closed since 2004. On November 30, 2018, inter-Korean rail transportation resumed when a South Korean train crossed into North Korea for the first time since November 2008. On December 8, 2018, a South Korean bus crossed into North Korea. Fuel constraints and the near absence of private automobiles have relegated road transportation to a secondary role. The road network was estimated to be around in 1999, up from between and in 1990, of which only , 7.5%, are paved. However, "The World Factbook" (published by the US Central Intelligence Agency) lists of roads with only paved as of 2006. Road quality is poor: drivers often swerve or change lanes to avoid potholes, at times crossing into opposing lanes. Likewise, sections under repair may not be properly signalled, so oncoming traffic should always be expected even on a divided motorway. There are three major multilane highways: an expressway connecting Pyongyang and Wonsan on the east coast, an expressway connecting Pyongyang and its port, Nampo, and a four-lane motorway linking Pyongyang and Kaesong. The overwhelming majority of the estimated 264,000 vehicles in use in 1990 were for the military. Rural bus service connects all villages, and cities have bus and tram services. Traffic has driven on the right since 1945–46. In cities, driving speeds are set by which lane a driver is in. The speed limits are , , and for the first, second, and subsequent (if existing) lanes "from the right", respectively. A white-on-blue sign informs about this. The leftmost lane, if it is number 3 from the right or higher and is not a turning lane, is often left vacant, even by tourist buses, while the second-from-right lane is generally used to overtake vehicles from lane one, such as public transport buses and trams. Outside cities, such as on motorways and other roads, speed limits are posted with the more widely known red-circle-with-number-inside sign rather than the blue in-city sign. On motorways, the typical limit is and for lanes from the right, respectively, as posted on the Pyongyang–Kaesong highway, for example. The rightmost lane of a motorway is sometimes, as seen on the Pyongyang–Myohyang highway, limited to near on-ramp joining points. Automobile transportation is further restricted by a series of regulations. According to North Korean exile Kim Ji-ho, it is forbidden to drive alone unless the driver receives a special permit (the driver must otherwise carry passengers).
Other permits are a military mobilization permit (to transport soldiers in times of war), a certificate of driver training (to be renewed every year), a fuel validity document (a certificate confirming that the fuel was purchased from an authorized source), and a mechanical certificate (to prove that the car is in working order). Although traffic drives on the right, North Korea has imported various used right-hand-drive (RHD) vehicles from Japan (through Russia), from tourist buses to Toyota Land Cruisers and HiAces. As of 2017, electric bicycles were becoming popular in Pyongyang, with about 5% of bicycles electric; both locally produced and Chinese models were available. As of 2016 there is of road, about 25% of the length of South Korea's road network. There is a mix of locally built and imported trolleybuses and trams in the major urban centres of North Korea. Earlier fleets were obtained from Europe and China. The Korean State Railway is the only rail operator in North Korea. It has a network of over of standard gauge and of narrow gauge lines; as of 2007, over of the standard gauge (well over 80%), along with of the narrow gauge lines, are electrified. The narrow gauge segment runs in the Haeju peninsula. Because of a lack of maintenance of the rail infrastructure and rolling stock, travel times by rail are increasing. It has been reported that the trip from Pyongyang to Kaesong can take up to six hours. Water transport on the major rivers and along the coasts plays a growing role in freight and passenger traffic. Except for the Yalu and Taedong rivers, most of the inland waterways, totaling , are navigable only by small boats. Coastal traffic is heaviest on the eastern seaboard, whose deeper waters can accommodate larger vessels. The major ports are Nampho on the west coast and Rajin, Chongjin, Wonsan, and Hamhung on the east coast. The country's harbor loading capacity in the 1990s was estimated at almost 35 million tons a year. There is continuing investment in upgrading and expanding port facilities, developing transportation, particularly on the Taedong River, and increasing the share of international cargo carried by domestic vessels. In the early 1990s, North Korea possessed an oceangoing merchant fleet, largely domestically produced, of 68 ships (of at least 1,000 gross registered tons), totalling 465,801 gross registered tons, which included 58 cargo ships and two tankers. As of 2008, this had increased to a total of 167 vessels, consisting mainly of cargo ships and tankers. North Korea maintains the "Man Gyong Bong 92", a ferry connecting Rajin and Vladivostok, Russia. North Korea's international air connections are limited in frequency and numbers. As of 2011, scheduled flights operate only from Pyongyang Sunan International Airport to Beijing, Dalian, Shenyang, Shanghai, Bangkok, Kuala Lumpur, Singapore, Moscow, Khabarovsk, Vladivostok, and Kuwait International Airport. Charters to other destinations operate on demand. Prior to 1995, many routes to Eastern Europe were operated, including services to Sofia, Belgrade, Prague, and Budapest. Air Koryo is the country's national airline. Air China also operates flights between Beijing and Pyongyang. In 2013, MIAT Mongolian Airlines began operating direct charter services from Ulaanbaatar to Pyongyang with Boeing 737-800 aircraft.
Internal flights are available between Pyongyang, Hamhung, Haeju (HAE), Hungnam (HGM), Kaesong (KSN), Kanggye, Kilju, Najin (NJN), Nampo (NAM), Sinuiju (SII), Samjiyon, Wonsan (WON), Songjin (SON), and Chongjin (CHO). All civil aircraft are operated by Air Koryo, which has a fleet of 19 passenger and cargo aircraft, all of which are Soviet or more modern Russian types. As of 2013, the CIA estimated that North Korea had 82 usable airports, 39 of which have permanent-surface runways. It was reported that North Korean air traffic controllers had been cut off from the international global satellite communications network in 2017 because North Korea had not made the required payments. Traffic controllers at Pyongyang Sunan International Airport had to use conventional telephone lines to inform their counterparts at Incheon International Airport that the flight containing North Korean delegates to the 2018 Winter Olympic Games in South Korea had taken off. Road vehicles in North Korea bear distance stars: paint markings that display how far the particular vehicle has traveled without incident, with each star representing a set distance travelled without an accident. The background colour of a DPRK licence plate denotes the vehicle type: blue for state vehicles, black for military vehicles, yellow for private vehicles (permitted only for persons who have contributed greatly to the DPRK), green for foreign non-governmental organizations (NGOs), and red for diplomatic vehicles.
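The plate-colour scheme above amounts to a simple lookup table. The sketch below is purely illustrative; the mapping mirrors the text, while the function and its use are hypothetical.

```python
# Illustrative lookup of the DPRK licence-plate colour scheme described above.
PLATE_COLOURS = {
    "blue": "State vehicle",
    "black": "Military vehicle",
    "yellow": "Private vehicle (persons who have contributed greatly to the DPRK)",
    "green": "Foreign non-governmental organization (NGO)",
    "red": "Diplomatic vehicle",
}


def classify_plate(colour: str) -> str:
    """Return the vehicle category implied by a plate background colour."""
    return PLATE_COLOURS.get(colour.lower(), "Unknown plate colour")


if __name__ == "__main__":
    print(classify_plate("yellow"))  # Private vehicle (...)
```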
Korean People's Army The Korean People's Army (KPA) comprises the "de facto" military forces of North Korea and the armed wing of the Workers' Party of Korea. Under the "Songun" policy, it is the central institution of North Korean society. Kim Jong-un serves as Supreme Commander and the chairman of the Central Military Commission. The KPA consists of five branches: the Ground Force, the Navy, the Air Force, the Strategic Rocket Forces, and the Special Operation Force. The KPA considers its primary adversaries to be the South Korean military and United States Forces Korea, across the Korean Demilitarized Zone, as it has since the Armistice Agreement of July 1953. With 5,889,000 paramilitary personnel, it is the largest paramilitary organisation in the world, accounting for roughly 25% of the North Korean population. Kim Il-sung's anti-Japanese guerrilla army, the Korean People's Revolutionary Army, was established on 25 April 1932. This revolutionary army was transformed into the regular army on 8 February 1948. Both dates are celebrated as army days, with decennial anniversaries treated as major celebrations, except from 1978 to 2014 when only the 1932 anniversary was celebrated. In 1939, the Korean Volunteer Army (KVA) was formed in Yan'an, China. The two individuals responsible for the army were Kim Tu-bong and Mu Chong. At the same time, a school was established near Yan'an for training military and political leaders for a future independent Korea. By 1945, the KVA had grown to approximately 1,000 men, mostly Korean deserters from the Imperial Japanese Army. During this period, the KVA fought alongside the Chinese communist forces from which it drew its arms and ammunition. After the defeat of the Japanese, the KVA accompanied the Chinese communist forces into eastern Jilin, intending to gain recruits from ethnic Koreans in China, particularly from Yanbian, and then enter Korea. Just after World War II and during the Soviet Union's occupation of the part of Korea north of the 38th Parallel, the Soviet 25th Army headquarters in Pyongyang issued a statement on 12 October 1945 ordering all armed resistance groups in the northern part of the peninsula to disband. Two thousand Koreans with previous experience in the Soviet army were sent to various locations around the country to organise constabulary forces with permission from Soviet military headquarters, and the force was created on 21 October 1945. The headquarters felt a need for a separate unit for security around railways, and the formation of that unit was announced on 11 January 1946. It was activated on 15 August of the same year to supervise existing security forces and the creation of the national armed forces. Military institutes such as the Pyongyang Academy (which became the No. 2 KPA Officers School in January 1949) and the Central Constabulary Academy (which became the KPA Military Academy in December 1948) soon followed for the education of political and military officers for the new armed forces. After the military was organised and facilities to educate its new recruits were constructed, the Constabulary Discipline Corps was reorganised into the Korean People's Army General Headquarters. The previously semi-official units became military regulars with the distribution of Soviet uniforms, badges, and weapons that followed the inception of the headquarters. The State Security Department, a forerunner to the Ministry of People's Defense, was created as part of the Interim People's Committee on 4 February 1948.
The formal creation of the Korean People's Army was announced four days later, on 8 February, the day after the Fourth Plenary Session of the People's Assembly approved the plan to separate the roles of the military and those of the police, seven months before the government of the Democratic People's Republic of Korea was proclaimed on 9 September 1948. In addition, the Ministry of State for the People's Armed Forces was established, which controlled a central guard battalion, two divisions, and an independent mixed and combined arms brigade. Before the outbreak of the Korean War, Joseph Stalin equipped the KPA with modern tanks, trucks, artillery, and small arms (at the time, the South Korean Army had nothing remotely comparable either in numbers of troops or equipment). During the opening phases of the Korean War in 1950, the KPA quickly drove South Korean forces south and captured Seoul, only to lose 70,000 of their 100,000-strong army in the autumn after U.S. amphibious landings at the Battle of Incheon and a subsequent drive to the Yalu River. On 4 November, China openly staged a military intervention. On 7 December, Kim Il-sung was deprived of the right of command of the KPA by China. The KPA subsequently played a secondary role to Chinese forces for the remainder of the conflict. By the time of the Armistice in 1953, the KPA had sustained 290,000 casualties and lost 90,000 men as POWs. In 1953, the Military Armistice Commission (MAC) was set up to oversee and enforce the terms of the armistice. The Neutral Nations Supervisory Commission (NNSC), made up of delegations from Czechoslovakia, Poland, Sweden and Switzerland, carried out inspections to ensure implementation of the terms of the Armistice that prevented reinforcements or new weapons being brought into Korea. From December 1962, Soviet strategic thinking was replaced by a people's war concept, and the Soviet idea of direct warfare gave way to a Maoist strategy of attrition. Along with the mechanisation of some infantry units, more emphasis was put on light weapons, high-angle indirect fire, night fighting, and sea denial. Until 1977, the KPA's official date of establishment was 8 February 1948. In 1978 it was changed to 25 April 1932, the date on which Kim Il-sung's anti-Japanese guerrilla army, the Korean People's Revolutionary Army, considered the predecessor of the KPA, had been formed. In 2018 the official date of establishment was changed back to 8 February 1948. The primary path for command and control of the KPA extends through the State Affairs Commission, which was led by its chairman Kim Jong-il until 2011, to the Ministry of People's Armed Forces and its General Staff Department. From there, command and control flows to the various bureaus and operational units. A secondary path, which ensures political control of the military establishment, extends through the Central Military Commission of the Workers' Party of Korea. Since 1990, numerous and dramatic transformations within the KPA have led to the current command and control structure. The details of most of these changes are simply unknown to the outside world. What little is known indicates that many changes were the natural result of the deaths of the aging leadership, including Kim Il-sung (July 1994), Minister of People's Armed Forces O Chin-u (February 1995) and Minister of People's Armed Forces Choi Kwang (February 1997).
The vast majority of changes were undertaken to secure the power and position of Kim Jong-il. From its founding in 1972, the State Affairs Commission (originally the National Defence Commission) was part of the Central People's Committee (CPC), while the Ministry of the People's Armed Forces was, from 1982 onward, under direct presidential control. At the eighteenth session of the sixth Central People's Committee, held on 23 May 1990, the commission was established as an independent body, rising to the same status as the CPC (now the Cabinet of North Korea) rather than being subordinate to it, as had been the case before. Concurrent with this, Kim Jong-il was appointed first vice-chairman of the State Affairs Commission. The following year, on 24 December 1991, Kim Jong-il was appointed Supreme Commander of the Korean People's Army. Four months later, on 20 April 1992, Kim Jong-il was awarded the rank of Marshal, while his father, by virtue of being the KPA's founding commander-in-chief, became Grand Marshal. One year later Kim Jong-il became Chairman of the State Affairs Commission, which by then was under Supreme People's Assembly control under the 1992 constitution as amended. Almost all officers of the KPA began their military careers as privates; only very few people are admitted to a military academy without prior service. The result is an egalitarian military system in which officers are familiar with the life of a private and a "military nobility" is all but nonexistent. Within the KPA, between December 1991 and December 1995, nearly 800 high-ranking officers (out of approximately 1,200) received promotions and preferential assignments. Three days after Kim Jong-il became Marshal, eight generals were appointed to the rank of Vice-Marshal. In April 1997, on the 85th anniversary of Kim Il-sung's birthday, Kim Jong-il promoted 127 general and admiral grade officers. The following April he ordered the promotions of another 22 generals and flag officers. Along with these changes, many KPA officers were appointed to influential positions within the Korean Workers' Party. These promotions continue today, coinciding with the celebrations of Kim Il-sung's birthday and the KPA anniversary every April, and more recently in July to mark the end of the Korean War. Under Kim Jong-il's leadership, political officers dispatched from the party monitored every move of a general's daily life, a role that analysts compare to that of Soviet political commissars during the early and middle years of that military establishment. Today the KPA exercises full control of both the Politburo and the Central Military Commission of the WPK, the KPA General Political and General Staff Departments and the Ministry of the People's Armed Forces, all of which have KPA representatives of at least general officer rank. Following changes made during the 4th session of the 13th Supreme People's Assembly on 29 June 2016, the State Affairs Commission has overseen the Ministry of the People's Armed Forces as part of its systemic responsibilities. All members of the State Affairs Commission have membership status (regular or alternate) on the WPK Political Bureau. North Korea has universal conscription for males and selective conscription for females, with many pre- and post-service requirements. Article 86 of the North Korean Constitution states: "National defence is the supreme duty and honour of citizens. Citizens shall defend the country and serve in the armed forces as required by law."
KPA soldiers serve three years of military service; the army also runs its own factories, farms and trading arms. The Young Red Guards are the youth cadet corps of the KPA for secondary-level and university-level students. Every Saturday, they hold mandatory 4-hour military training drills, and they have training activities on and off campus to prepare them for military service when they turn 18 or after graduation, as well as for contingency measures in peacetime. The Korean People's Internal Security Forces, formerly the Korean People's Security Forces, operates under the Ministry of People's Security (and under the wartime control of the Ministry of People's Armed Forces) and forms the national gendarmerie and civil defence force of the KPA. The KPISF has units for civil defence, traffic management, civil disturbance control, and local security, and it has its own special forces units. The service shares the ranks of the KPA (with the exception of Marshals) but wears different uniforms. The KPA's annual budget is approximately US$6 billion. In 2009, the U.S.-based Institute for Science and International Security reported that North Korea may possess fissile material for around two to nine nuclear warheads. The North Korean Songun ("Military First") policy elevates the KPA to the primary position in the government and society. According to North Korea's state news agency, military expenditures for 2010 made up 15.8 percent of the state budget. Most analyses of North Korea's defence sector, however, estimate that defence spending constitutes between one-quarter and one-third of all government spending. As of 2003, according to the International Institute for Strategic Studies, North Korea's defence budget consumed some 25 percent of central government spending. In the mid-1970s and early 1980s, according to figures released by the Polish Arms Control and Disarmament Agency, between 32 and 38 percent of central government expenditures went towards defence. North Korea sells missiles and military equipment to many countries worldwide. In April 2009, the United Nations named the Korea Mining and Development Trading Corporation (KOMID) as North Korea's primary arms dealer and main exporter of equipment related to ballistic missiles and conventional weapons. It also named Korea Ryonbong as a supporter of North Korea's military-related sales. Historically, North Korea has assisted a vast number of revolutionary, insurgent and terrorist groups in more than 62 countries. A cumulative total of more than 5,000 foreign personnel have been trained in North Korea, and over 7,000 military advisers, primarily from the Reconnaissance General Bureau, have been dispatched to some forty-seven countries. Some of the organisations which received North Korean aid include the Polisario Front, Janatha Vimukthi Peramuna, the Communist Party of Thailand, the Palestine Liberation Organization and the Islamic Revolutionary Guard Corps. The Zimbabwean Fifth Brigade received its initial training from KPA instructors. North Korean troops allegedly saw combat during the Libyan–Egyptian War and the Angolan Civil War. Up to 200 KPAF pilots took part in the Vietnam War, scoring several kills against US aircraft. Two KPA anti-aircraft artillery regiments were sent to North Vietnam as well. North Korean instructors trained Hezbollah fighters in guerrilla warfare tactics around 2004, prior to the Second Lebanon War.
During the Syrian Civil War, Arabic-speaking KPA officers may have assisted the Syrian Arab Army in military operations planning and have supervised artillery bombardments in the Aleppo area. The Korean People's Army Ground Force (KPAGF) is the main branch of the Korean People's Army responsible for land-based military operations. It is the "de facto" army of North Korea. The Korean People's Navy is organised into two fleets which are not able to support each other. The East Fleet is headquartered at T'oejo-dong and the West Fleet at Nampho. A number of training, shipbuilding and maintenance units and a naval air wing report directly to Naval Command Headquarters at Pyongyang. The majority of the Navy's ships are assigned to the East Fleet. Due to the short range of most ships, the two fleets are not known to have ever conducted joint operations or shared vessels. The Korean People's Army Air Force (KPAF) is also responsible for North Korea's air defence through the use of anti-aircraft artillery and surface-to-air missiles (SAMs). While much of the equipment is outdated, the high saturation of multilayered, overlapping, mutually supporting air defence sites provides a formidable challenge to enemy air attacks. The Korean People's Strategic Rocket Forces is a major division of the KPA that controls the DPRK's nuclear and conventional strategic missiles. It is mainly equipped with surface-to-surface missiles of Soviet and Chinese design, as well as locally developed long-range missiles. The special forces of the Korean People's Army are asymmetric forces with a total strength of about 200,000 troops. Since the Korean War (referred to in North Korea as the Korean War of Liberation), they have continued to focus on infiltrating troops into South Korean territory and conducting sabotage. After the Korean War, North Korea maintained a powerful but smaller military force than that of South Korea. In 1967 the KPA forces of about 345,000 were much smaller than the South Korean ground forces of about 585,000. North Korea's relative isolation and economic plight since the 1980s have tipped the balance of military power in favour of the better-equipped South Korean military. In response to this predicament, North Korea relies on asymmetric warfare techniques and unconventional weaponry to achieve parity against high-tech enemy forces. North Korea is reported to have developed a wide range of technologies towards this end, such as stealth paint to conceal ground targets, midget submarines and human torpedoes, and blinding laser weapons; it probably also has a chemical weapons program and is likely to possess a stockpile of chemical weapons. The Korean People's Army operates ZM-87 anti-personnel lasers, which are banned under the United Nations Protocol on Blinding Laser Weapons. Since the 1980s, North Korea has also been actively developing its own cyber warfare capabilities. As of 2014, the secretive Bureau 121 – the elite North Korean cyber warfare unit – comprises approximately 1,800 highly trained hackers. In December 2014, the Bureau was accused of hacking Sony and making threats, leading to the cancellation of "The Interview", a political satire comedy film based on the assassination of Kim Jong-un. The Korean People's Army has also made advances in electronic warfare by developing GPS jammers. Current models include vehicle-mounted jammers; jammers with a range of more than 100 km are being developed, along with electromagnetic pulse bombs.
The Korean People's Army has also made attempts to jam South Korean military satellites. North Korea does not have satellites capable of obtaining satellite imagery useful for military purposes, and appears to use imagery from foreign commercial platforms. Despite the general fuel and ammunition shortages for training, it is estimated that the wartime strategic reserves of food for the army are sufficient to feed the regular troops for 500 days, while fuel and ammunition – amounting to 1.5 million and 1.7 million tonnes respectively – are sufficient to wage a full-scale war for 100 days. The KPA does not operate aircraft carriers, but has other means of power projection. Korean People's Air Force Il-76MD aircraft provide a strategic airlift capacity of 6,000 troops, while the Navy's sealift capacity amounts to 15,000 troops. According to South Korean officials in 2010, the Strategic Rocket Forces operate more than 1,000 ballistic missiles, although the U.S. Department of Defense reported in 2012 that North Korea has fewer than 200 missile launchers. North Korea acquired 12 Foxtrot-class and Golf-II-class missile submarines as scrap in 1993. Some analysts suggest that these have either been refurbished with the help of Russian experts or that their launch tubes have been reverse-engineered and externally fitted to regular submarines or cargo ships. However, GlobalSecurity reports that the submarines were rust-eaten hulks with the launch tubes inactivated under Russian observation before delivery, and the U.S. Department of Defense does not list them as active. A photograph of Kim Jong-un receiving a briefing from his top generals on 29 March 2013 showed a list purporting to show that the military had at least 40 submarines, 13 landing ships, 6 minesweepers, 27 support vessels and 1,852 aircraft. The Korean People's Army operates a very large quantity of equipment, including 4,100 tanks, 2,100 APCs, 8,500 field artillery pieces, 5,100 multiple rocket launchers, 11,000 air defence guns and some 10,000 MANPADS and anti-tank guided missiles in the Ground Force; about 500 vessels in the Navy and 730 combat aircraft in the Air Force, of which 478 are fighters and 180 are bombers. North Korea also has the largest special forces in the world, as well as the largest submarine fleet. The equipment is a mixture of World War II vintage vehicles and small arms, widely proliferated Cold War technology, and more modern Soviet or locally produced weapons. North Korea possesses a vast array of long-range artillery in shelters just north of the Korean Demilitarized Zone. It has been a long-standing cause for concern that a preemptive or retaliatory strike using this arsenal of artillery north of the Demilitarized Zone would lead to a massive loss of life in Seoul. Estimates of how many people would die in an attack on Seoul vary. When the Clinton administration mobilised forces over the reactor at Yongbyon in 1994, planners concluded that retaliation by North Korea against Seoul could kill 40,000 people. Other estimates project hundreds of thousands or possibly millions of fatalities if North Korea uses chemical munitions. The KPA possesses a variety of Chinese- and Soviet-sourced equipment and weaponry, as well as locally produced versions and improvements of them. Soldiers are mostly armed with indigenous Kalashnikov-type rifles as the standard-issue weapon.
Front-line troops are issued the Type 88, while the older Type 58 assault rifle and Type 68A/B have been shifted to rear-echelon or home guard units. A rifle of unknown nomenclature was seen during the 2017 'Day of the Sun' military parade, appearing to consist of a grenade launcher and a standard assault rifle, similar to the U.S. OICW or the South Korean S&T Daewoo K11. It is, however, more likely that the "grenade launcher" (the large tube present under the rifle) is actually a large helical magazine, similar to that used by the Bizon SMG. North Korea generally designates rifles as "Type XX", similar to the Chinese naming system. On 15 November 2018, North Korea successfully tested a "newly developed ultramodern tactical weapon". Leader Kim Jong-un observed the test at the Academy of Defense Science and called it a "decisive turn" in bolstering the combat power of the North Korean army. The U.S. Department of Defense believes North Korea probably has a chemical weapons program and is likely to possess a stockpile of such weapons. North Korea has tested a series of different missiles, including short-, medium-, intermediate- and intercontinental-range ballistic missiles, as well as submarine-launched ballistic missiles. Estimates of the country's nuclear stockpile vary: some experts believe Pyongyang has between fifteen and twenty nuclear weapons, while U.S. intelligence believes the number to be between thirty and sixty bombs. In July 2017, the regime conducted two tests of an intercontinental ballistic missile (ICBM) capable of carrying a large nuclear warhead. The Pentagon confirmed North Korea's ICBM tests, and analysts estimate that the new missile has a potential range of and, if fired on a flatter trajectory, could be capable of reaching mainland U.S. territory. On 9 October 2006, the North Korean government announced that it had conducted a nuclear test for the first time. Experts at the United States Geological Survey and Japanese seismological authorities detected an earthquake with a preliminary estimated magnitude of 4.3 from the site in North Korea, proving the official claims to be true. North Korea also went on to claim that it had developed a nuclear weapon in 2009. It is widely believed to possess a stockpile of relatively simple nuclear weapons. The IAEA has met Ri Je Son, the Director General of the General Department of Atomic Energy (GDAE) of the DPRK, to discuss nuclear matters. Ri Je Son was also mentioned in this role in 2002 in a United Nations article. On 3 September 2017, the North Korean leadership announced that it had conducted a nuclear test with what it claimed to be its first hydrogen bomb detonation. The detonation took place at an underground location at the Punggye-ri nuclear test site in North Hamgyong Province at 12:00 noon local time. South Korean officials claimed the test yielded 50 kilotons of explosive force, with many international observers claiming the test likely involved some form of a thermonuclear reaction.
Foreign relations of North Korea The foreign relations of North Korea – officially the Democratic People's Republic of Korea (DPRK) – have been shaped by its conflict with capitalist countries like South Korea and by its historical ties with world communism. Both the government of North Korea and the government of South Korea (officially the Republic of Korea) claim to be the sole legitimate government of the whole of Korea. The Korean War in the 1950s failed to resolve the issue, leaving North Korea locked in a military confrontation with South Korea and the United States Forces Korea across the Demilitarized Zone. At the start of the Cold War, North Korea was recognized only by Communist countries. Over the following decades, it established relations with developing countries and joined the Non-Aligned Movement. When the Eastern Bloc collapsed in the years 1989–1992, North Korea made efforts to improve its diplomatic relations with developed capitalist countries. At the same time, there were international efforts to resolve the confrontation on the Korean peninsula (known as the Korean conflict). When North Korea acquired nuclear weapons after the demise of the Soviet Union, its main economic backer, resolving the resulting crisis became a more important issue for much of the international community. North Korea is considered a rogue state and is not a signatory to the Non-Proliferation Treaty (NPT); it had formerly acceded to the treaty, which it violated, but withdrew in 2003 after expelling the International Atomic Energy Agency's inspectors. Its nuclear program is seen as part of a strategy of "nuclear coercion", which analysts interpret in terms of the regime's survival. In 2018, North Korean leader Kim Jong-un made a sudden peace overture towards South Korea and the United States. This led to the first face-to-face discussion between the State Chairman of North Korea and a sitting President of the United States, as part of what became known as the 2018 Korean peace process. The Constitution of North Korea establishes the country's foreign policy. While Article 2 of the constitution describes the country as a "revolutionary state," Article 9 says that the country will work to achieve Korean reunification, maintain state sovereignty and political independence, and "national unity." Many articles specifically outline the country's foreign policy. Article 15 says that the country will "protect the democratic national rights of Korean compatriots overseas and their legitimate rights and interests as recognized by international law" and Article 17 sets out the basic ideals of the country's foreign policy. Other parts of the constitution set out further foreign policy principles. Article 36 says that foreign trade by the DPRK will be conducted "by state organs, enterprises, and social, cooperative organizations" while the country will "develop foreign trade on the principles of complete equality and mutual benefit." Article 37 adds that the country will encourage "institutions, enterprises and organizations in the country to conduct equity or contractual joint ventures with foreign corporations and individuals, and to establish and operate enterprises of various kinds in special economic zones." Furthermore, Article 38 says that the DPRK will implement a protectionist tariff policy "to protect the independent national economy" while Article 59 says the country's armed forces will "carry out the military-first revolutionary line."
On other foreign policy matters, Article 80 says that the country will grant asylum to foreign nationals who have been persecuted "for struggling for peace and democracy, national independence and socialism or for the freedom of scientific and cultural pursuits." Ultimately, however, as set out in Articles 100–103 and 109, the chairman of the National Defense Commission (NDC) is the supreme leader of the country. His term is the same as that of members of the Supreme People's Assembly (SPA), five years, as established in Article 90. He directs the country's armed forces and guides overall state affairs, but does not determine them alone, since he remains accountable to the SPA. Rather, the NDC chairman works to defend the state from external actors. Currently, Kim Jong-un is the Chairman of the Workers' Party of Korea (WPK), State Chairman of North Korea, and holder of numerous other leadership positions. The Constitution also states, in Article 117, that the President of the SPA Presidium, which can convene the assembly, represents the state and receives "credentials and letters of recall from envoys accredited by other countries." Additionally, the cabinet of the DPRK has the authority to "conclude treaties with foreign countries and conduct external affairs", as noted in Article 125. North Korea is one of the few countries in which the giving of presents still plays a significant role in diplomatic protocol, with the Korean Central News Agency (KCNA) reporting from time to time that the country's leader has received a floral basket or other gift from a foreign leader or organization. During a 2000 visit to Pyongyang, US Secretary of State Madeleine Albright gave North Korean leader Kim Jong-il a basketball signed by Michael Jordan, as he took an interest in NBA basketball. During the 2000 inter-Korean summit, Kim Jong-il made a gift of two Pungsan dogs (associated with the North) to South Korean president Kim Dae-jung. In return, Kim Dae-jung gave two Jindo dogs (associated with the South) to Kim Jong-il. At their Pyongyang summit in 2018, North Korean leader Kim Jong-un gave two Pungsan dogs to South Korean President Moon Jae-in. North Korea takes its defense seriously, confronting countries it sees as threatening its sovereignty and restricting the activities of foreign diplomats. After 1945, the Soviet Union supplied the economic and military aid that enabled North Korea to mount its invasion of South Korea in 1950. Soviet aid and influence continued at a high level during the Korean War. The country's early government was dominated by a faction rooted in the anti-Japanese Korean nationalist movement based in Manchuria and China, in which Kim Il-sung had participated before later forming the Workers' Party of Korea (WPK). The assistance of Chinese troops during the war after 1950, and their presence in the country until 1958, gave China some degree of influence in North Korea. In 1961, North Korea concluded formal mutual security treaties with the Soviet Union and China, which have not been formally ended. In the case of China, Kim Il-sung and Chou En-lai signed the Sino-North Korean Mutual Aid and Cooperation Friendship Treaty, whereby Communist China pledged to immediately render military and other assistance by all means to its ally against any outside attack.
The treaty says, in short, that: The Chairman of the People's Republic of China and the Presidium of the Supreme People's Assembly of the Democratic People's Republic of Korea, determined, in accordance with Marxism–Leninism and the principle of proletarian internationalism and on the basis of mutual respect for state sovereignty and territorial integrity, mutual non-aggression, non-interference in each other's internal affairs, equality and mutual benefit, and mutual assistance and support, to make every effort to further strengthen and develop the fraternal relations of friendship, co-operation and mutual assistance between the People's Republic of China and the Democratic People's Republic of Korea, to jointly guard the security of the two peoples, and to safeguard and consolidate the peace of Asia and the world...[Article I:] The Contracting Parties will continue to make every effort to safeguard the peace of Asia and the world and the security of all peoples...[Article II:] In the event of one of the Contracting Parties being subjected to the armed attack by any state or several states jointly and thus being involved in a state of war, the other Contracting Party shall immediately render military and other assistance by all means at its disposal...[Article V:] The Contracting Parties, on the principles of mutual respect for sovereignty, non-interference in each other's internal affairs, equality and mutual benefit and in the spirit of friendly co-operation, will continue to render each other every possible economic and technical aid in the cause of socialist construction of the two countries and will continue to consolidate and develop economic, cultural, and scientific and technical co-operation between the two countries...[Article VI:] The Contracting Parties hold that the unification of Korea must be realized along peaceful and democratic lines and that such a solution accords exactly with the national interests of the Korean people and the aim of preserving peace in the Far East. The treaty was extended twice, in 1981 and 2001, and is valid until 2021. For most of the Cold War, North Korea avoided taking sides in the Sino-Soviet split. It was originally recognized only by countries in the Communist Bloc, until Algeria recognized it in 1958. East Germany was an important source of economic cooperation for North Korea. The East German leader, Erich Honecker, who visited in 1977, was one of Kim Il-sung's closest foreign friends. In 1986, the two countries signed an agreement on military co-operation. Kim was also close to the maverick Communist leaders Josip Broz Tito of Yugoslavia and Nicolae Ceaușescu of Romania. North Korea began to play a part in the global radical movement, forging ties with such diverse groups as the Black Panther Party of the US, the Workers' Party of Ireland, and the African National Congress. As it increasingly emphasized its independence, North Korea began to promote the doctrine of "Juche" ("self-reliance") as an alternative to orthodox Marxism-Leninism and as a model for developing countries to follow. When North-South dialogue started in 1972, North Korea began to receive diplomatic recognition from countries outside the Communist bloc. Within four years, North Korea was recognized by 93 countries, on par with South Korea's 96. North Korea gained entry into the World Health Organization and, as a result, sent its first permanent observer missions to the United Nations (UN). In 1975, it joined the Non-Aligned Movement.
During the 1980s, the pace of North Korea's establishment of new diplomatic relations slowed considerably. Following Kim Il-sung's 1984 visit to Moscow, there was a dramatic improvement in Soviet-DPRK relations, resulting in renewed deliveries of advanced Soviet weaponry to North Korea and increases in economic aid. In 1989, as a response to the 1988 Seoul Olympics, North Korea hosted the 13th World Festival of Youth and Students in Pyongyang. South Korea established diplomatic relations with the Soviet Union in 1990 and the People's Republic of China in 1992, which put a serious strain on relations between North Korea and its traditional allies. Moreover, the demise of the Communist states in Eastern Europe in 1989 and the disintegration of the Soviet Union in 1991 led to a significant drop in Communist aid to North Korea and to a marked deterioration in relations with Russia. Subsequently, South Korea developed the "sunshine policy" towards North Korea, aiming for peaceful Korean reunification. This policy ended in 2009. In September 1991, North Korea became a member of the UN. In July 2000, it began participating in the ASEAN Regional Forum (ARF), as Foreign Minister Paek Nam-sun attended the ARF ministerial meeting in Bangkok on 26–27 July. North Korea also expanded its bilateral diplomatic ties in that year, establishing diplomatic relations with Italy, Australia and the Philippines. The United Kingdom established diplomatic relations with North Korea on 13 December 2000, as did Canada in February 2001, followed by Germany and New Zealand on 1 March 2001. In 2006, North Korea test-fired a series of ballistic missiles, after Chinese officials had advised North Korean authorities not to do so. As a result, Chinese authorities publicly rebuked what the West perceives as China's closest ally, and supported UN Security Council Resolution 1718, which imposed sanctions on North Korea. At other times, however, China has blocked United Nations resolutions threatening sanctions against North Korea. In January 2009, China's paramount leader Hu Jintao and North Korea's supreme leader Kim Jong-il exchanged greetings and declared 2009 the "year of China–DPRK friendship", marking 60 years of diplomatic relations between the two countries. On 28 November 2010, as part of the United States diplomatic cables leak, WikiLeaks and media partners such as "The Guardian" published details of communications in which Chinese officials referred to North Korea as a "spoiled child" and its nuclear program as "a threat to the whole world's security", while two anonymous Chinese officials claimed there was growing support in Beijing for Korean reunification under the South's government. In August 1971, both North and South Korea agreed to hold talks through their respective Red Cross societies with the aim of reuniting the many Korean families separated following the division of Korea after the Korean War. After a series of secret meetings, on 4 July 1972 both sides announced an agreement to work toward peaceful reunification and an end to the hostile atmosphere prevailing on the peninsula. Dialogue was renewed on several fronts in September 1984, when South Korea accepted the North's offer to provide relief goods to victims of severe flooding in South Korea. In a major initiative in July 1988, South Korean President Roh Tae-woo called for new efforts to promote North-South exchanges, family reunification, inter-Korean trade and contact in international forums.
Roh followed up this initiative in a UN General Assembly speech in which South Korea offered to discuss security matters with the North for the first time. In September 1990, the first of eight prime minister-level meetings between officials of North Korea and South Korea took place in Seoul, beginning an especially fruitful period of dialogue. The prime ministerial talks resulted in two major agreements: the Agreement on Reconciliation, Nonaggression, Exchanges, and Cooperation (the "Basic Agreement") and the Declaration on the Denuclearization of the Korean Peninsula (the "Joint Declaration"). The "Joint Declaration" on denuclearization was initialled on 13 December 1991. It forbade both sides to test, manufacture, produce, receive, possess, store, deploy, or use nuclear weapons and forbade the possession of nuclear reprocessing and uranium enrichment facilities. On 30 January 1992, North Korea also signed a nuclear safeguards agreement with the IAEA, as it had pledged to do in 1985 when acceding to the nuclear Non-Proliferation Treaty. This safeguards agreement allowed IAEA inspections to begin in June 1992. As the 1990s progressed, concern over the North's nuclear program became a major issue in North-South relations and between North Korea and the US. In 1998, South Korean President Kim Dae-jung announced a Sunshine Policy towards North Korea. This led in June 2000 to the first Inter-Korean summit, between Kim Dae-jung and Kim Jong-il. In September 2000, the North and South Korean teams marched together at the Sydney Olympics. Trade increased to the point where South Korea became North Korea's largest trading partner. Starting in 1998, the Mount Kumgang Tourist Region was developed as a joint venture between the government of North Korea and Hyundai. In 2003, the Kaesong Industrial Region was established to allow South Korean businesses to invest in the North. In 2007, South Korean President Roh Moo-hyun held talks with Kim Jong-il in Pyongyang. On October 4, 2007, South Korean President Roh and Kim signed a peace declaration. The document called for international talks to replace the armistice which ended the Korean War with a permanent peace treaty. The Sunshine Policy was formally abandoned by subsequent South Korean President Lee Myung-bak in 2010. The Kaesong Industrial Park was closed in 2013, amid tensions about North Korea's nuclear weapons program. It reopened the same year but closed again in 2016. In 2017 Moon Jae-in was elected President of South Korea with promises to return to the Sunshine Policy. In his New Year address for 2018, North Korean leader Kim Jong-un proposed sending a delegation to the upcoming Winter Olympics in South Korea. The Seoul–Pyongyang hotline was reopened after almost two years. North and South Korea marched together in the Olympics opening ceremony and fielded a united women's ice hockey team. North Korea sent an unprecedented high-level delegation, headed by Kim Yo-jong, sister of Kim Jong-un, and Kim Yong-nam, President of the Presidium of the Supreme People's Assembly, as well as athletes and performers. On 27 April, the 2018 inter-Korean summit took place between President Moon Jae-in and Kim Jong-un on the South Korean side of the Joint Security Area. It was also the first time since the Korean War that a North Korean leader had entered South Korean territory. The summit ended with both countries pledging to work towards complete denuclearization of the Korean Peninsula.
They agreed to work to remove all nuclear weapons from the Korean Peninsula and, within the year, to declare an official end to the Korean War. As part of the Panmunjom Declaration, which was signed by the leaders of both countries, both sides also called for an end to longstanding military activities in the region of the Korean border and for the reunification of Korea. The two leaders also agreed to work together to connect and modernise the railways across their border. Moon and Kim met for a second time on 26 May. This summit was unannounced, was held in the North Korean portion of the Joint Security Area, and concerned Kim's upcoming summit with US President Donald Trump. Trump and Kim met on 12 June 2018 in Singapore and endorsed the Panmunjom Declaration. On June 30, 2019, Kim and Moon met again at the Korean DMZ, this time joined by Trump. During 2019, North Korea conducted a series of short-range missile tests, while the US and South Korea took part in joint military drills in August. On 16 August 2019, North Korea's ruling party made a statement criticizing the South for participating in the drills and for buying US military hardware, calling it a "grave provocation" and saying there would be no more negotiation. North Korea's nuclear research program started with Soviet help in the 1960s, on the condition that it join the Nuclear Non-Proliferation Treaty (NPT). In the 1980s an indigenous nuclear reactor development program started with a small experimental 5 MWe gas-cooled reactor in Yongbyon, with a 50 MWe and a 200 MWe reactor to follow. Concerns that North Korea had non-civilian nuclear ambitions were first raised in the late 1980s and almost resulted in its withdrawal from the NPT in 1994. However, the Agreed Framework and the Korean Peninsula Energy Development Organization (KEDO) temporarily resolved this crisis: the US and several other countries agreed that, in exchange for North Korea dismantling its nuclear program, it would be provided with two light-water reactors (LWRs), along with moves toward normalization of political and economic relations. This agreement started to break down from 2001 because of slow progress on the KEDO light-water reactor project and U.S. President George W. Bush's Axis of Evil speech. After continued allegations from the United States, North Korea declared the existence of uranium enrichment programs during a private meeting with American military officials. North Korea withdrew from the Nuclear Non-Proliferation Treaty on 10 January 2003. In 2006, North Korea conducted its first nuclear test. The third (and last) phase of the fifth round of six-party talks was held on 8 February 2007. The agreement reached at the end of that round set out steps to be taken by all six parties within 30 and 60 days, including moves toward normalization of US–North Korean and Japanese–North Korean diplomatic ties, on the condition that North Korea cease operating its Yongbyon nuclear research centre. North Korea conducted further nuclear tests in 2009, 2013, January and September 2016, and 2017. In 2018, North Korea ceased conducting nuclear and missile tests. Kim Jong-un signed the Panmunjom Declaration committing to "denuclearisation of the Korean Peninsula" and affirmed the same commitment in a subsequent meeting with US President Donald Trump.
North Korea is often perceived as the "Hermit Kingdom", completely isolated from the rest of the world, but it maintains diplomatic relations with 164 independent states. The country also has bilateral relations with the State of Palestine, the Sahrawi Arab Democratic Republic, and the European Union, and is a member of a number of international organizations.
Northern Ireland Northern Ireland (Irish: "Tuaisceart Éireann"; Ulster-Scots: "Norlin Airlann") is variously described as a country, province or region which is part of the United Kingdom. Located in the northeast of the island of Ireland, Northern Ireland shares a border to the south and west with the Republic of Ireland. In 2011, its population was 1,810,863, constituting about 30% of the island's total population and about 3% of the UK's population. Established by the Northern Ireland Act 1998 as part of the Good Friday Agreement, the Northern Ireland Assembly (colloquially referred to as Stormont after its location) holds responsibility for a range of devolved policy matters, while other areas are reserved for the British government. Northern Ireland co-operates with the Republic of Ireland in several areas, and the Agreement granted the Republic the ability to "put forward views and proposals" with "determined efforts to resolve disagreements between the two governments". On 11 January 2020, legislators in Northern Ireland formed a government for the first time since the Executive of the 5th Northern Ireland Assembly collapsed in January 2017, following the Renewable Heat Incentive scandal.
Northern Ireland was created in 1921, when Ireland was partitioned between Northern Ireland and Southern Ireland by the Government of Ireland Act 1920. Unlike Southern Ireland, which would become the Irish Free State in 1922, the majority of Northern Ireland's population were unionists, who wanted to remain within the United Kingdom. Most of these were the Protestant descendants of colonists from Great Britain. However, a significant minority, mostly Catholics, were nationalists who wanted a united Ireland independent of British rule. Today, the former generally see themselves as British and the latter generally see themselves as Irish, while a distinct Northern Irish or Ulster identity is claimed both by a large minority of Catholics and Protestants and by many of those who are non-aligned. For most of the 20th century following its creation, Northern Ireland was marked by discrimination and hostility between these two sides, in what First Minister of Northern Ireland David Trimble called a "cold house" for Catholics. In the late 1960s, conflict between state forces and chiefly Protestant unionists on the one hand, and chiefly Catholic nationalists on the other, erupted into three decades of violence known as the Troubles, which claimed over 3,500 lives and injured over 50,000 others. The 1998 Good Friday Agreement was a major step in the peace process, including the decommissioning of weapons and security normalisation, although sectarianism and religious segregation still remain major social problems and sporadic violence has continued.
The economy of Northern Ireland was historically the most industrialised on the island of Ireland. It declined as a result of the political and social turmoil of the Troubles but has grown significantly since the late 1990s. The initial growth came from the "peace dividend" and the links which increased trade with the Republic of Ireland, continuing with a significant increase in tourism, investment and business from around the world. Unemployment in Northern Ireland peaked at 17.2% in 1986, before dropping to 6.1%, down 1.2 percentage points over the year and similar to the UK figure of 6.2%. More than 58% of those unemployed had been unemployed for over a year. 
Cultural links between Northern Ireland, the rest of Ireland, and the rest of the UK are complex, with Northern Ireland sharing both the culture of Ireland and the culture of the United Kingdom. In many sports, the island of Ireland fields a single team, a notable exception being association football. Northern Ireland competes separately at the Commonwealth Games, and people from Northern Ireland may compete for either Great Britain or Ireland at the Olympic Games.
The region that is now Northern Ireland was the bedrock of the Irish war of resistance against English programmes of colonialism in the late 16th century. The English-controlled Kingdom of Ireland had been declared by the English king Henry VIII in 1542, but Irish resistance made English control fragmentary. Following the Irish defeat at the Battle of Kinsale, though, the region's Gaelic, Roman Catholic aristocracy fled to continental Europe in 1607 and the region became subject to major programmes of colonialism by Protestant English (mainly Anglican) and Scottish (mainly Presbyterian) settlers. A rebellion in 1641 by Irish aristocrats against English rule resulted in a massacre of settlers in Ulster, in the context of a war breaking out between England, Scotland and Ireland fuelled by religious intolerance in government. Victories by English forces in that war and further Protestant victories in the Williamite War in Ireland (1688–1691) toward the close of the 17th century solidified Anglican rule in Ireland. In Northern Ireland, the victories of the Siege of Derry (1689) and the Battle of the Boyne (1690) in this latter war are still celebrated by some Protestants (both Anglican and Presbyterian).
Popes Innocent XI and Alexander VIII had supported William of Orange instead of his maternal uncle and father-in-law James II, despite William being Protestant and James a Catholic, because of William's participation in an alliance with both Protestant and Catholic powers in Europe in wars against Louis XIV (the "Sun King"), the powerful King of France who had been in conflict with the papacy for decades. In 1693, however, Pope Innocent XII recognised James as the continuing King of Great Britain and Ireland in place of William, after reconciliation with Louis. In 1695, and contrary to the terms of the Treaty of Limerick (October 1691), a series of penal laws were passed by the Anglican ruling class in Ireland in intense anger at the Pope's recognition of James over William, which was felt to be a betrayal. The intention of the laws was to materially disadvantage the Catholic community and, to a lesser extent, the Presbyterian community. In the context of open institutional discrimination, the 18th century saw secret, militant societies develop in communities in the region and act on sectarian tensions in violent attacks. These events escalated at the end of the century, following an event known as the Battle of the Diamond, which established the supremacy of the Anglican and Presbyterian Peep o'Day Boys over the Catholic Defenders and led to the formation of the Anglican Orange Order. A rebellion in 1798 led by the cross-community Belfast-based Society of the United Irishmen and inspired by the French Revolution sought to break the constitutional ties between Ireland and Britain and unite Irish people of all religions. 
Following this, in an attempt to quell sectarianism and force the removal of discriminatory laws (and to prevent the spread of French-style republicanism to Ireland), the government of the Kingdom of Great Britain pushed for the two kingdoms to be merged. The new state, formed in 1801, the United Kingdom of Great Britain and Ireland, was governed from a single government and parliament based in London. Some 250,000 people from Ulster emigrated to the British North American colonies between 1717 and 1775. It is estimated that there are more than 27 million Scotch-Irish Americans now living in the United States, along with many Scotch-Irish Canadians in Canada.
During the 19th century, legal reforms begun in the late 18th century continued to remove statutory discrimination against Catholics, and progressive programmes enabled tenant farmers to buy land from landlords. By the close of the century, a large and disciplined cohort of Irish Nationalist MPs at Westminster committed the Liberal Party to autonomy ("Home Rule") for Ireland, a prospect bitterly opposed by Irish Unionists. In 1912, after decades of obstruction from the House of Lords, and with a Liberal government dependent on Nationalist support, Home Rule became a near-certainty. A clash between the House of Commons and House of Lords over a controversial budget had produced the Parliament Act 1911, which enabled the veto of the Lords to be overturned. The House of Lords veto had been the unionists' main guarantee that Home Rule would not be enacted, because the majority of members of the House of Lords were unionists. In response, opponents of Home Rule, from Conservative and Unionist Party leaders such as Bonar Law and Dublin-based barrister Sir Edward Carson to militant working-class unionists in Ireland, threatened the use of violence. In 1914, they smuggled thousands of rifles and rounds of ammunition from Imperial Germany for use by the Ulster Volunteers (UVF), a paramilitary organisation opposed to the implementation of Home Rule.
Unionists were in a minority in Ireland as a whole, but in the northern province of Ulster they were a very large majority in County Antrim and County Down, small majorities in County Armagh and County Londonderry, and a substantial minority in Ulster's five other counties. The four counties named, along with County Fermanagh and County Tyrone, would later constitute Northern Ireland. Most of the remaining 26 counties which later became the Republic of Ireland were overwhelmingly majority-nationalist. During the Home Rule Crisis, the possibility was discussed of a "temporary" partition of these six counties from the rest of Ireland. In 1914, the Third Home Rule Bill received Royal Assent as the Government of Ireland Act 1914. However, its implementation was suspended before it came into effect because of the outbreak of the First World War, and the Amending Bill to partition Ireland was abandoned. The war was expected to last only a few weeks but in fact lasted four years. By the end of the war (during which the 1916 Easter Rising had taken place), the Act was seen as unimplementable. Public opinion among nationalists had shifted during the war from a demand for home rule to one for full independence. 
In 1919, David Lloyd George proposed that a new bill be drawn up by the cabinet's "Walter Long Committee" on Ireland; adopting the findings of his inconclusive 1917–18 Irish Convention, the bill would divide Ireland into two Home Rule areas: twenty-six counties being ruled from Dublin and six being ruled from Belfast. Straddling these two areas would be a shared Lord Lieutenant of Ireland, who would appoint both governments, and a Council of Ireland, which Lloyd George believed would evolve into an all-Ireland parliament. Events overtook the government. The pro-independence Sinn Féin won 73 of the 105 parliamentary seats in Ireland at the general election of 1918 and unilaterally established the First Dáil, an extra-legal parliament in Ireland. Ireland was partitioned between Northern Ireland and Southern Ireland in 1921, under the terms of Lloyd George's Government of Ireland Act 1920, during the Anglo-Irish War between Irish republican and British forces. A truce was established on 11 July 1921; the war ended on 6 December 1921 with the signing of the Anglo-Irish Treaty, which created the Irish Free State. Under the terms of the treaty, Northern Ireland would become part of the Free State unless the government opted out by presenting an address to the king, although in practice partition remained in place. As expected, the Houses of the Parliament of Northern Ireland resolved on 7 December 1922 (the day after the establishment of the Irish Free State) to exercise their right to opt out of the Free State by making an address to the King.
Shortly afterwards, the Boundary Commission was established to decide on the territorial boundaries between the Irish Free State and Northern Ireland. Owing to the outbreak of civil war in the Free State, the work of the commission was delayed until 1925. Leaders in Dublin expected a substantial reduction in the territory of Northern Ireland, with nationalist areas moving to the Free State. However, the commission's report recommended only that some small portions of land should be ceded from Northern Ireland to the Free State, and even that a small amount of land should be ceded from the Free State to Northern Ireland. To prevent argument, this report was suppressed and, in exchange for a waiver of the Free State's obligations to the UK's public debt and the dissolution of the Council of Ireland (sought by the Government of Northern Ireland), the initial six-county border was maintained with no changes.
In June 1940, to encourage the neutral Irish state to join the Allies, British Prime Minister Winston Churchill indicated to the Taoiseach Éamon de Valera that the United Kingdom would push for Irish unity, but believing that Churchill could not deliver, de Valera declined the offer. The British did not inform the Government of Northern Ireland that they had made the offer to the Dublin government, and de Valera's rejection was not publicised until 1970. The Ireland Act 1949 gave the first legal guarantee that the region would not cease to be part of the United Kingdom without the consent of the Parliament of Northern Ireland.
The Troubles, which started in the late 1960s, consisted of about 30 years of recurring acts of intense violence, during which 3,254 people were killed, with over 50,000 injured. From 1969 to 2003 there were over 36,900 shooting incidents and over 16,200 bombings or attempted bombings associated with the Troubles. 
The conflict was caused by the disputed status of Northern Ireland within the United Kingdom and the discrimination against the Irish nationalist minority by the dominant unionist majority. From 1967 to 1972 the Northern Ireland Civil Rights Association (NICRA), which modelled itself on the US civil rights movement, led a campaign of civil resistance to anti-Catholic discrimination in housing, employment, policing and electoral procedures. The franchise for local government elections included only rate-payers and their spouses, and so excluded over a quarter of the electorate. While the majority of disenfranchised electors were Protestant, Catholics were over-represented among them since they were poorer and had more adults still living in the family home. NICRA's campaign, seen by many unionists as an Irish republican front, and the violent reaction to it, proved to be a precursor to a more violent period. As early as 1969, armed campaigns of paramilitary groups began, including the Provisional IRA campaign of 1969–1997, which aimed at ending British rule in Northern Ireland and creating a united Ireland, and the Ulster Volunteer Force, formed in 1966 in response to the perceived erosion of both the British character and unionist domination of Northern Ireland. The state security forces – the British Army and the police (the Royal Ulster Constabulary) – were also involved in the violence. The British government's position is that its forces were neutral in the conflict, trying to uphold law and order in Northern Ireland and the right of the people of Northern Ireland to democratic self-determination. Republicans regarded the state forces as combatants in the conflict, pointing to collusion between the state forces and the loyalist paramilitaries as proof of this. The "Ballast" investigation by the Police Ombudsman has confirmed that British forces, and in particular the RUC, did collude with loyalist paramilitaries, were involved in murder, and did obstruct the course of justice when such claims had been investigated, although the extent to which such collusion occurred is still hotly disputed.
As a consequence of the worsening security situation, autonomous regional government for Northern Ireland was suspended in 1972. Alongside the violence, there was a political deadlock between the major political parties in Northern Ireland, including those who condemned violence, over the future status of Northern Ireland and the form of government it should have. In 1973, Northern Ireland held a referendum to determine whether it should remain in the United Kingdom or be part of a united Ireland. The vote went heavily in favour (98.9%) of maintaining the status quo. Approximately 57.5% of the total electorate voted in support, but only 1% of Catholics voted, following a boycott organised by the Social Democratic and Labour Party (SDLP).
The Troubles were brought to an uneasy end by a peace process which included the declaration of ceasefires by most paramilitary organisations and the complete decommissioning of their weapons, the reform of the police, and the corresponding withdrawal of army troops from the streets and from sensitive border areas such as South Armagh and Fermanagh, as agreed by the signatories to the Belfast Agreement (commonly known as the "Good Friday Agreement"). 
This reiterated the long-held British position, which had never before been fully acknowledged by successive Irish governments, that Northern Ireland would remain within the United Kingdom until a majority of voters in Northern Ireland decided otherwise. The Constitution of Ireland was amended in 1999 to remove a claim of the "Irish nation" to sovereignty over the entire island (in Article 2). The new Articles 2 and 3, added to the Constitution to replace the earlier articles, implicitly acknowledge that the status of Northern Ireland, and its relationships within the rest of the United Kingdom and with the Republic of Ireland, would only be changed with the agreement of a majority of voters in each jurisdiction. This aspect was also central to the Belfast Agreement, which was signed in 1998 and ratified by referendums held simultaneously in both Northern Ireland and the Republic. At the same time, the British Government recognised for the first time the so-called "Irish dimension": the principle that the people of the island of Ireland as a whole have the right, without any outside interference, to solve the issues between North and South by mutual consent. The latter statement was key to winning support for the agreement from nationalists. The Agreement established a devolved power-sharing government within Northern Ireland, which must consist of both unionist and nationalist parties. These institutions were suspended by the British Government in 2002 after Police Service of Northern Ireland (PSNI) allegations of spying by people working for Sinn Féin at the Assembly (Stormontgate). The resulting case against the accused Sinn Féin member collapsed.
On 28 July 2005, the Provisional IRA declared an end to its campaign and has since decommissioned what is thought to be all of its arsenal. This final act of decommissioning was performed in accordance with the Belfast Agreement of 1998 and under the watch of the Independent International Commission on Decommissioning and two external church witnesses. Many unionists, however, remain sceptical. The International Commission later confirmed that the main loyalist paramilitary groups, the UDA, UVF and the Red Hand Commando, had decommissioned what is thought to be all of their arsenals, witnessed by a former archbishop and a former top civil servant. Politicians elected to the Assembly at the 2003 Assembly election were called together on 15 May 2006 under the Northern Ireland Act 2006 for the purpose of electing a First Minister and deputy First Minister of Northern Ireland and choosing the members of an Executive (before 25 November 2006) as a preliminary step to the restoration of devolved government. Following the election held on 7 March 2007, devolved government returned on 8 May 2007 with Democratic Unionist Party (DUP) leader Ian Paisley and Sinn Féin deputy leader Martin McGuinness taking office as First Minister and deputy First Minister, respectively.
In its white paper on Brexit, the United Kingdom government reiterated its commitment to the Belfast Agreement. With regard to Northern Ireland's status, it said that the UK Government's "clearly-stated preference is to retain Northern Ireland's current constitutional position: as part of the UK, but with strong links to Ireland". 
The main political divide in Northern Ireland is between unionists, who wish to see Northern Ireland continue as part of the United Kingdom, and nationalists, who wish to see Northern Ireland unified with the Republic of Ireland, independent from the United Kingdom. These two opposing views are linked to deeper cultural divisions. Unionists are predominantly Ulster Protestant, descendants of mainly Scottish, English, and Huguenot settlers as well as Gaels who converted to one of the Protestant denominations. Nationalists are overwhelmingly Catholic and descend from the population predating the settlement, with a minority from the Scottish Highlands as well as some converts from Protestantism. Discrimination against nationalists under the Stormont government (1921–1972) gave rise to the civil rights movement in the 1960s. While some unionists argue that discrimination was not just due to religious or political bigotry, but also the result of more complex socio-economic, socio-political and geographical factors, its existence, and the manner in which nationalist anger at it was handled, were a major contributing factor to the Troubles. The political unrest went through its most violent phase between 1968 and 1994.
In 2007, 36% of the population defined themselves as unionist, 24% as nationalist and 40% as neither. According to a 2015 opinion poll, 70% express a long-term preference for the maintenance of Northern Ireland's membership of the United Kingdom (either directly ruled or with devolved government), while 14% express a preference for membership of a united Ireland. This discrepancy can be explained by the overwhelming preference among Protestants to remain a part of the UK (93%), while Catholic preferences are spread across a number of solutions to the constitutional question, including remaining a part of the UK (47%), a united Ireland (32%), Northern Ireland becoming an independent state (4%), and those who "don't know" (16%). Official voting figures, which reflect views on the "national question" along with issues of candidate, geography, personal loyalty and historic voting patterns, show 54% of Northern Ireland voters voting for unionist parties, 42% for nationalist parties and 4% for "other". Opinion polls consistently show that election results are not necessarily an indication of the electorate's stance regarding the constitutional status of Northern Ireland.
Most of the population of Northern Ireland are at least nominally Christian, belonging mostly to Roman Catholic and Protestant denominations. Many voters (regardless of religious affiliation) are attracted to unionism's conservative policies, while other voters are instead attracted to the traditionally leftist Sinn Féin and SDLP and their respective party platforms of democratic socialism and social democracy. For the most part, Protestants feel a strong connection with Great Britain and wish for Northern Ireland to remain part of the United Kingdom. Many Catholics, however, generally aspire to a united Ireland or are less certain about how to solve the constitutional question. In the 2015 survey by Northern Ireland Life and Times, 47% of Northern Irish Catholics supported Northern Ireland remaining a part of the United Kingdom, either by direct rule (6%) or devolved government (41%). Protestants have a slight majority in Northern Ireland, according to the latest Northern Ireland Census. The make-up of the Northern Ireland Assembly reflects the appeals of the various parties within the population. 
Of the 108 Members of the Legislative Assembly (MLAs), 56 are unionists and 40 are nationalists (the remaining 12 are classified as "other"). Since 1998, Northern Ireland has had devolved government within the United Kingdom, presided over by the Northern Ireland Assembly and a cross-community government (the Northern Ireland Executive). The UK Government and UK Parliament are responsible for reserved and excepted matters. Reserved matters comprise listed policy areas (such as civil aviation, units of measurement, and human genetics) that Parliament may devolve to the Assembly some time in the future. Excepted matters (such as international relations, taxation and elections) are never expected to be considered for devolution. On all other governmental matters, the Executive together with the 90-member Assembly may legislate for and govern Northern Ireland. Devolution in Northern Ireland is dependent upon participation by members of the Northern Ireland Executive in the North/South Ministerial Council, which coordinates areas of co-operation (such as agriculture, education and health) between Northern Ireland and the Republic of Ireland. Additionally, "in recognition of the Irish Government's special interest in Northern Ireland", the Government of Ireland and Government of the United Kingdom co-operate closely on non-devolved matters through the British-Irish Intergovernmental Conference.
Elections to the Northern Ireland Assembly are by single transferable vote, with five MLAs elected from each of 18 parliamentary constituencies. In addition, eighteen representatives (Members of Parliament, MPs) are elected to the lower house of the UK parliament from the same constituencies using the first-past-the-post system. However, not all of those elected take their seats. Sinn Féin MPs, currently seven, refuse to take the oath to serve the Queen that is required before MPs are allowed to take their seats. In addition, the upper house of the UK parliament, the House of Lords, currently has some 25 appointed members from Northern Ireland. The Northern Ireland Office represents the UK government in Northern Ireland on reserved matters and represents Northern Ireland's interests within the UK Government. Additionally, the Republic's government also has the right to "put forward views and proposals" on non-devolved matters in relation to Northern Ireland. The Northern Ireland Office is led by the Secretary of State for Northern Ireland, who sits in the Cabinet of the United Kingdom.
Northern Ireland is a distinct legal jurisdiction, separate from the two other jurisdictions in the United Kingdom (England and Wales, and Scotland). Northern Ireland law developed from Irish law that existed before the partition of Ireland in 1921. Northern Ireland is a common law jurisdiction and its common law is similar to that in England and Wales. However, there are important differences in law and procedure between Northern Ireland and England and Wales. The body of statute law affecting Northern Ireland reflects the history of Northern Ireland, including Acts of the Parliament of the United Kingdom, the Northern Ireland Assembly, the former Parliament of Northern Ireland and the Parliament of Ireland, along with some Acts of the Parliament of England and of the Parliament of Great Britain that were extended to Ireland under Poynings' Law between 1494 and 1782.
There is no generally accepted term to describe what Northern Ireland is: province, region, country or something else. 
The choice of term can be controversial and can reveal the writer's political preferences. This has been noted as a problem by several writers on Northern Ireland, with no generally recommended solution. Owing in part to the way in which the United Kingdom, and Northern Ireland, came into being, there is no legally defined term to describe what Northern Ireland 'is'. There is also no uniform or guiding way to refer to Northern Ireland amongst the agencies of the UK government. For example, the websites of the Office of the Prime Minister of the United Kingdom and the UK Statistics Authority describe the United Kingdom as being made up of four countries, one of these being Northern Ireland. Other pages on the same websites refer to Northern Ireland specifically as a "province" as do publications of the UK Statistics Authority. The website of the Northern Ireland Statistics and Research Agency also refers to Northern Ireland as being a province as does the website of the Office of Public Sector Information and other agencies within Northern Ireland. Publications of HM Treasury and the Department of Finance and Personnel of the Northern Ireland Executive, on the other hand, describe Northern Ireland as being a "region of the UK". The UK's submission to the 2007 United Nations Conference on the Standardization of Geographical Names defines the UK as being made up of two countries (England and Scotland), one principality (Wales) and one province (Northern Ireland). Unlike England, Scotland and Wales, Northern Ireland has no history of being an independent country or of being a nation in its own right. Some writers describe the United Kingdom as being made up of three countries and one province or point out the difficulties with calling Northern Ireland a country. Authors writing specifically about Northern Ireland dismiss the idea that Northern Ireland is a "country" in general terms, and draw contrasts in this respect with England, Scotland and Wales. Even for the period covering the first 50 years of Northern Ireland's existence, the term "country" is considered inappropriate by some political scientists on the basis that many decisions were still made in London. The absence of a distinct nation of Northern Ireland, separate within the island of Ireland, is also pointed out as being a problem with using the term and is in contrast to England, Scotland, and Wales. Many commentators prefer to use the term "province", although that is also not without problems. It can arouse irritation, particularly among nationalists, for whom the title province is properly reserved for the traditional province of Ulster, of which Northern Ireland comprises six out of nine counties. The BBC style guide is to refer to Northern Ireland as a province, and use of the term is common in literature and newspaper reports on Northern Ireland and the United Kingdom. Some authors have described the meaning of this term as being equivocal: referring to Northern Ireland as being a province both of the United Kingdom and of the traditional country of Ireland. "Region" is used by several UK government agencies and the European Union. Some authors choose this word but note that it is "unsatisfactory". Northern Ireland can also be simply described as "part of the UK", including by UK government offices. Many people inside and outside Northern Ireland use other names for Northern Ireland, depending on their point of view. 
Disagreement on names, and the reading of political symbolism into the use or non-use of a word, also attaches itself to some urban centres. The most notable example is whether Northern Ireland's second city should be called "Derry" or "Londonderry". Choice of language and nomenclature in Northern Ireland often reveals the cultural, ethnic and religious identity of the speaker. Those who do not belong to any group but lean towards one side often tend to use the language of that group. Supporters of unionism in the British media (notably "The Daily Telegraph" and the "Daily Express") regularly call Northern Ireland "Ulster". Some media outlets in the Republic use "North of Ireland", "the North", or (less often) the "Six Counties". Government and cultural organisations in Northern Ireland often use the word "Ulster" in their title; for example, the University of Ulster, the Ulster Museum, the Ulster Orchestra, and BBC Radio Ulster. Although some news bulletins since the 1990s have opted to avoid all contentious terms and use the official name, Northern Ireland, the term "the North" remains commonly used by broadcast media in the Republic. "Norn Iron" is a local nickname used to refer to Northern Ireland, derived from the pronunciation of the words "Northern Ireland" in an exaggerated Ulster accent (particularly one from the greater Belfast area). The phrase is seen as a lighthearted way to refer to Northern Ireland, based as it is on regional pronunciation. It often refers to the Northern Ireland national football team.
Northern Ireland was covered by an ice sheet for most of the last ice age and on numerous previous occasions, the legacy of which can be seen in the extensive coverage of drumlins in Counties Fermanagh, Armagh, Antrim and particularly Down. The centrepiece of Northern Ireland's geography is Lough Neagh, the largest freshwater lake both on the island of Ireland and in the British Isles. A second extensive lake system is centred on Lower and Upper Lough Erne in Fermanagh. The largest island of Northern Ireland is Rathlin, off the north Antrim coast. Strangford Lough is the largest inlet in the British Isles. There are substantial uplands in the Sperrin Mountains (an extension of the Caledonian mountain belt) with extensive gold deposits, the granite Mourne Mountains and the basalt Antrim Plateau, as well as smaller ranges in South Armagh and along the Fermanagh–Tyrone border. None of the hills are especially high; Slieve Donard, in the dramatic Mournes, is Northern Ireland's highest point. Belfast's most prominent peak is Cavehill. The volcanic activity which created the Antrim Plateau also formed the eerily geometric pillars of the Giant's Causeway on the north Antrim coast. Also in north Antrim are the Carrick-a-Rede Rope Bridge, Mussenden Temple and the Glens of Antrim. The Lower and Upper River Bann, River Foyle and River Blackwater form extensive fertile lowlands, with excellent arable land also found in North and East Down, although much of the hill country is marginal and suitable largely for animal husbandry. The valley of the River Lagan is dominated by Belfast, whose metropolitan area includes over a third of the population of Northern Ireland, with heavy urbanisation and industrialisation along the Lagan Valley and both shores of Belfast Lough. 
The vast majority of Northern Ireland has a temperate maritime climate ("Cfb" in the Köppen climate classification), rather wetter in the west than the east, although cloud cover is very common across the region. The weather is unpredictable at all times of the year, and although the seasons are distinct, they are considerably less pronounced than in interior Europe or on the eastern seaboard of North America. Average daytime maximums in Belfast are in January and in July. The highest maximum temperature recorded was at Knockarevan, near Garrison, County Fermanagh on 30 June 1976 and at Belfast on 12 July 1983. The lowest minimum temperature recorded was at Castlederg, County Tyrone on 23 December 2010.
Northern Ireland is the least forested part of the United Kingdom and Ireland, and one of the least forested parts of Europe. Until the end of the Middle Ages, the land was heavily forested with native trees such as oak, ash, hazel, birch, alder, willow, aspen, elm, rowan, yew and Scots pine. Today, only 8% of Northern Ireland is woodland, and most of this is non-native conifer plantations.
Northern Ireland consists of six historic counties: County Antrim, County Armagh, County Down, County Fermanagh, County Londonderry and County Tyrone. These counties are no longer used for local government purposes; instead, there are eleven districts of Northern Ireland, which have different geographical extents. These were created in 2015, replacing the twenty-six districts which previously existed. Although counties are no longer used for local governmental purposes, they remain a popular means of describing where places are. They are officially used when applying for an Irish passport, which requires one to state one's county of birth. The name of that county then appears in both Irish and English on the passport's information page, as opposed to the town or city of birth on the United Kingdom passport. The Gaelic Athletic Association still uses the counties as its primary means of organisation and fields representative teams of each GAA county. The original system of car registration numbers, largely based on counties, still remains in use. In 2000, the telephone numbering system was restructured into an 8-digit scheme with (except for Belfast) the first digit approximately reflecting the county. The county boundaries still appear on Ordnance Survey of Northern Ireland maps and the Phillips Street Atlases, among others. With their decline in official use, there is often confusion surrounding towns and cities which lie near county boundaries, such as Belfast and Lisburn, which are split between Counties Down and Antrim (the majority of both cities, however, is in Antrim). In March 2018, "The Sunday Times" published its list of Best Places to Live in Britain, including the following places in Northern Ireland: Ballyhackamore near Belfast (overall best for Northern Ireland); Holywood, County Down; Newcastle, County Down; Portrush, County Antrim; and Strangford, County Down.
Northern Ireland has traditionally had an industrial economy, most notably in shipbuilding, rope manufacture and textiles, but most heavy industry has since been replaced by services, primarily the public sector. Seventy percent of the economy's revenue comes from the service sector. Apart from the public sector, another important service sector is tourism, which rose to account for over 1% of the economy's revenue in 2004. Tourism has been a major growth area since the end of the Troubles. 
Key tourism attractions include the historic cities of Derry, Belfast and Armagh and the many castles in Northern Ireland. Large firms are attracted to Northern Ireland by government subsidies and its skilled workforce. The local economy saw contraction during the Great Recession. In response, the Northern Ireland Assembly has sent trade missions abroad. The Executive wishes to gain taxation powers from London, to align Northern Ireland's corporation tax rate with the unusually low rate of the Republic of Ireland.
Northern Ireland has underdeveloped transport infrastructure, with most infrastructure concentrated around Greater Belfast, Greater Derry and Craigavon. Northern Ireland is served by three airports – Belfast International near Antrim, George Best Belfast City (integrated into the railway network at Sydenham in East Belfast), and City of Derry in County Londonderry. Major seaports at Larne and Belfast carry passengers and freight between Great Britain and Northern Ireland. Passenger railways are operated by Northern Ireland Railways. With Iarnród Éireann (Irish Rail), Northern Ireland Railways co-operates in providing the joint Enterprise service between Dublin Connolly and Lanyon Place. The whole of Ireland has a mainline railway network with a gauge that is unique in Europe and has resulted in distinct rolling stock designs. The only preserved line of this gauge is the Downpatrick and County Down Railway, which operates steam and diesel locomotives. The main railway lines run to and from Belfast Great Victoria Street and Lanyon Place railway stations. The cross-border road connecting the ports of Larne in Northern Ireland and Rosslare Harbour in the Republic of Ireland is being upgraded as part of an EU-funded scheme. European route E01 runs from Larne through the island of Ireland, Spain and Portugal to Seville.
The population of Northern Ireland has risen yearly since 1978. The population in 2011 was 1.8 million, having grown 7.5% over the previous decade from just under 1.7 million in 2001. This constitutes just under 3% of the population of the UK (62 million) and just over 28% of the population of the island of Ireland (6.3 million). The population of Northern Ireland is almost entirely white (98.2%). In 2011, 88.8% of the population were born in Northern Ireland, with 4.5% born elsewhere in Britain and 2.9% born in the Republic of Ireland. 4.3% were born elsewhere, triple the proportion recorded in 2001; most of these are from Eastern Europe, including Lithuania and Latvia. The largest non-white ethnic groups were Chinese (6,300) and Indian (6,200). Black people of various origins made up 0.2% of the 2011 population and people of mixed ethnicity made up a further 0.2%.
At the 2011 census, 41.5% of the population identified as Protestant/non-Roman Catholic Christian, 41% as Roman Catholic, and 0.8% as non-Christian, while 17% identified with no religion or did not state one. The biggest of the Protestant/non-Roman Catholic Christian denominations were the Presbyterian Church (19%), the Church of Ireland (14%) and the Methodist Church (3%). In terms of community background (i.e. religion or religion brought up in), 48% of the population came from a Protestant background, 45% from a Catholic background, 0.9% from non-Christian backgrounds, and 5.6% from non-religious backgrounds. 
Several studies and surveys carried out between 1971 and 2006 have indicated that, in general, most Protestants in Northern Ireland see themselves primarily as British, whereas a majority of Roman Catholics regard themselves primarily as Irish. This does not, however, account for the complex identities within Northern Ireland, given that many of the population regard themselves as "Ulster" or "Northern Irish", either as a primary or secondary identity. Overall, the Catholic population is somewhat more ethnically diverse than the more homogeneous Protestant population. 83.1% of Protestants identified as "British" or with a British ethnic group (English, Scottish, or Welsh) in the 2011 census, whereas only 3.9% identified as "Irish". Meanwhile, 13.7% of Catholics identified as "British" or with a British ethnic group. A further 4.4% identified as "all other", a category made up largely of immigrants, for example from Poland. A 2008 survey found that 57% of Protestants described themselves as British, while 32% identified as Northern Irish, 6% as Ulster and 4% as Irish. Compared to a similar survey carried out in 1998, this shows a fall in the percentage of Protestants identifying as British and Ulster and a rise in those identifying as Northern Irish. The 2008 survey found that 61% of Catholics described themselves as Irish, with 25% identifying as Northern Irish, 8% as British and 1% as Ulster. These figures were largely unchanged from the 1998 results.
People born in Northern Ireland are, with some exceptions, deemed by UK law to be citizens of the United Kingdom. They are also, with similar exceptions, entitled to be citizens of Ireland. This entitlement was reaffirmed in the 1998 Good Friday Agreement between the British and Irish governments, which provides that: ...it is the birthright of all the people of Northern Ireland to identify themselves and be accepted as Irish or British, or both, as they may so choose, and accordingly [the two governments] confirm that their right to hold both British and Irish citizenship is accepted by both Governments and would not be affected by any future change in the status of Northern Ireland. As a result of the Agreement, the Constitution of the Republic of Ireland was amended. The current wording provides that people born in Northern Ireland are entitled to be Irish citizens on the same basis as people from any other part of the island. Neither government, however, extends its citizenship to all persons born in Northern Ireland: both exclude some people born in Northern Ireland, in particular persons neither of whose parents is a British or Irish citizen. The Irish restriction was given effect by the twenty-seventh amendment to the Irish Constitution in 2004. The position in UK nationality law is that most of those born in Northern Ireland are UK nationals, whether or not they so choose. Renunciation of British citizenship requires the payment of a fee, currently £372.
English is spoken as a first language by almost all of the Northern Ireland population. It is the "de facto" official language, and the Administration of Justice (Language) Act (Ireland) 1737 prohibits the use of languages other than English in legal proceedings. Under the Good Friday Agreement, Irish and Ulster Scots (an Ulster dialect of the Scots language, sometimes known as "Ullans") are recognised as "part of the cultural wealth of Northern Ireland". 
Two all-island bodies for the promotion of these were created under the Agreement: "Foras na Gaeilge", which promotes the Irish language, and the Ulster Scots Agency, which promotes the Ulster Scots dialect and culture. These operate separately under the aegis of the North/South Language Body, which reports to the North/South Ministerial Council. The British government in 2001 ratified the European Charter for Regional or Minority Languages. Irish (in Northern Ireland) was specified under Part III of the Charter, with a range of specific undertakings in relation to education, translation of statutes, interaction with public authorities, the use of placenames, media access, support for cultural activities and other matters. A lower level of recognition was accorded to Ulster Scots, under Part II of the Charter.
The dialect of English spoken in Northern Ireland shows influence from the lowland Scots language. There are supposedly some minute differences in pronunciation between Protestants and Catholics; for instance, the name of the letter "h", which Protestants tend to pronounce as "aitch", as in British English, and Catholics tend to pronounce as "haitch", as in Hiberno-English. However, geography is a much more important determinant of dialect than religious background.
The Irish language ("Gaeilge"), or "Gaelic", is a native language of Ireland. It was spoken predominantly throughout what is now Northern Ireland before the Ulster Plantations in the 17th century, and most place names in Northern Ireland are anglicised versions of a Gaelic name. Today, the language is often associated with Irish nationalism (and thus with Catholics). However, in the 19th century the language was seen as a common heritage, with Ulster Protestants playing a leading role in the Gaelic revival. In the 2011 census, 11% of the population of Northern Ireland claimed "some knowledge of Irish" and 3.7% reported being able to "speak, read, write and understand" Irish. In another survey, from 1999, 1% of respondents said they spoke it as their main language at home. The dialect spoken in Northern Ireland, Ulster Irish, has two main varieties, East Ulster Irish and Donegal Irish (or West Ulster Irish), and is the dialect of Irish closest to Scottish Gaelic (which developed into a separate language from Irish Gaelic in the 17th century). Some words and phrases are shared with Scottish Gaelic, and the dialects of east Ulster – those of Rathlin Island and the Glens of Antrim – were very similar to the dialect of Argyll, the part of Scotland nearest to Ireland. The dialects of Armagh and Down were also very similar to the dialects of Galloway. Use of the Irish language in Northern Ireland today is politically sensitive. The erection by some district councils of bilingual street names in both English and Irish, invariably in predominantly nationalist districts, is resisted by unionists, who claim that it creates a "chill factor" and thus harms community relationships. Efforts by members of the Northern Ireland Assembly to legislate for some official uses of the language have failed to achieve the required cross-community support, and the UK government has declined to legislate. There has recently been an increase in interest in the language among unionists in East Belfast.
Ulster Scots comprises varieties of the Scots language spoken in Northern Ireland. For a native English speaker, "[Ulster Scots] is comparatively accessible, and even at its most intense can be understood fairly easily with the help of a glossary." 
Along with the Irish language, the Good Friday Agreement recognised the dialect as part of Northern Ireland's unique culture, and the St Andrews Agreement recognised the need to "enhance and develop the Ulster Scots language, heritage and culture". Approximately 2% of the population claim to speak Ulster Scots. However, the number speaking it as their main language in their home is negligible; only 0.9% of 2011 census respondents claimed to be able to speak, read, write and understand Ulster Scots, while 8.1% professed to have "some ability".
The most common sign language in Northern Ireland is Northern Ireland Sign Language (NISL). However, because in the past Catholic families tended to send their deaf children to schools in Dublin, where Irish Sign Language (ISL) is commonly used, ISL is still common among many older deaf people from Catholic families. Irish Sign Language has some influence from the French family of sign languages, which includes American Sign Language (ASL). NISL takes a large component from the British family of sign languages (which also includes Auslan), with many borrowings from ASL. It is described as being related to Irish Sign Language at the syntactic level, while much of the lexicon is based on British Sign Language (BSL).
Northern Ireland shares both the culture of Ireland and the culture of the United Kingdom. Parades are a prominent feature of Northern Ireland society, more so than in the rest of Ireland or in Britain. Most are held by Protestant fraternities such as the Orange Order, and Ulster loyalist marching bands. Each summer, during the "marching season", these groups have hundreds of parades, deck streets with British flags, bunting and specially made arches, and light large towering bonfires. The biggest parades are held on 12 July (The Twelfth). There is often tension when these activities take place near Catholic neighbourhoods, which sometimes leads to violence. Since the end of the Troubles, Northern Ireland has witnessed rising numbers of tourists. Attractions include cultural festivals, musical and artistic traditions, countryside and geographical sites of interest, public houses, welcoming hospitality and sports (especially golf and fishing). Since 1987, public houses have been allowed to open on Sundays, despite some opposition.
The Ulster Cycle is a large body of prose and verse centring on the traditional heroes of the Ulaid in what is now eastern Ulster. This is one of the four major cycles of Irish mythology. The cycle centres on the reign of Conchobar mac Nessa, who is said to have been king of Ulster around the 1st century. He ruled from Emain Macha (now Navan Fort near Armagh) and had a fierce rivalry with queen Medb and king Ailill of Connacht and their ally Fergus mac Róich, former king of Ulster. The foremost hero of the cycle is Conchobar's nephew Cúchulainn, who features in the epic prose/poem "Táin Bó Cúailnge" (The Cattle Raid of Cooley), a "casus belli" between Ulster and Connacht.
Northern Ireland comprises a patchwork of communities whose national loyalties are represented in some areas by flags flown from flagpoles or lamp posts. The Union Jack and the former Northern Ireland flag are flown in many loyalist areas, and the Tricolour, adopted by republicans as the flag of Ireland in 1916, is flown in some republican areas. Even kerbstones in some areas are painted red-white-blue or green-white-orange, depending on whether local people express unionist/loyalist or nationalist/republican sympathies. 
The official flag is that of the state having sovereignty over the territory, i.e. the Union Flag. The former Northern Ireland flag, also known as the "Ulster Banner" or "Red Hand Flag", is a banner derived from the coat of arms of the Government of Northern Ireland and was used until 1972. Since 1972, it has had no official status. The Union Flag and the Ulster Banner are used exclusively by unionists. UK flags policy states that in Northern Ireland, "The Ulster flag and the Cross of St Patrick have no official status and, under the Flags Regulations, are not permitted to be flown from Government Buildings." The Irish Rugby Football Union and the Church of Ireland have used the Saint Patrick's Saltire or "Cross of St Patrick". This red saltire on a white field was used to represent Ireland in the flag of the United Kingdom. It is still used by some British Army regiments. Foreign flags are also found, such as Palestinian flags in some nationalist areas and Israeli flags in some unionist areas.
The United Kingdom national anthem, "God Save the Queen", is often played at state events in Northern Ireland. At the Commonwealth Games and some other sporting events, the Northern Ireland team uses the Ulster Banner as its flag, notwithstanding its lack of official status, and the "Londonderry Air" (usually set to lyrics as "Danny Boy"), which also has no official status, as its anthem. The national football team also uses the Ulster Banner as its flag but uses "God Save The Queen" as its anthem. Major Gaelic Athletic Association matches are opened by the national anthem of the Republic of Ireland, "Amhrán na bhFiann" (The Soldier's Song), which is also used by most other all-Ireland sporting organisations. Since 1995, the Ireland rugby union team has used a specially commissioned song, "Ireland's Call", as the team's anthem. The Irish national anthem is also played at Dublin home matches, being the anthem of the host country.
Northern Irish murals have become well-known features of Northern Ireland, depicting past and present events and documenting peace and cultural diversity. Almost 2,000 murals have been documented in Northern Ireland since the 1970s.
In Northern Ireland, sport is popular and important in the lives of many people. Sports tend to be organised on an all-Ireland basis, with a single team for the whole island. The most notable exception is association football, which has separate governing bodies for each jurisdiction. The Irish Football Association (IFA) serves as the organising body for association football in Northern Ireland, with the Northern Ireland Football League (NIFL) responsible for the independent administration of the three divisions of national domestic football, as well as the Northern Ireland Football League Cup. The highest levels of competition within Northern Ireland are the NIFL Premiership and the NIFL Championship. However, many players from Northern Ireland compete with clubs in England and Scotland. NIFL clubs are semi-professional or Intermediate. NIFL Premiership clubs are also eligible to compete in the UEFA Champions League and UEFA Europa League, with the league champions entering the Champions League second qualifying round and the second-placed league finisher, the European play-off winners and the Irish Cup winners entering the Europa League second qualifying round. No clubs have ever reached the group stage. 
Despite Northern Ireland's small population, the national team qualified for the World Cup in 1958, 1982 and 1986, reaching the quarter-finals in 1958 and 1982, and reached the first knockout round of the European Championship in 2016.
The six counties of Northern Ireland are among the nine governed by the Ulster branch of the Irish Rugby Football Union, the governing body of rugby union in Ireland. Ulster is one of the four professional provincial teams in Ireland and competes in the Pro14 and European Cup. It won the European Cup in 1999. In international competitions, the Ireland national rugby union team's recent successes include four Triple Crowns between 2004 and 2009 and a Grand Slam in 2009 in the Six Nations Championship.
In cricket, Northern Ireland is represented by the Ireland cricket team, which represents both Northern Ireland and the Republic of Ireland. The team is a full member of the International Cricket Council, having been granted Test status and full membership (along with Afghanistan) by the ICC in June 2017. It is therefore able to compete in Test cricket, the highest level of competitive cricket in the international arena, and is one of the twelve full-member countries of the ICC. Ireland is the current champion of the ICC Intercontinental Cup. One of Ireland's regular international venues is Stormont in Belfast.
Gaelic games include Gaelic football, hurling (and camogie), handball and rounders. Of the four, Gaelic football is the most popular in Northern Ireland. Players play for local clubs, with the best being selected for their county teams. The Ulster GAA is the branch of the Gaelic Athletic Association that is responsible for the nine counties of Ulster, which include the six of Northern Ireland. These nine county teams participate in the Ulster Senior Football Championship, Ulster Senior Hurling Championship, All-Ireland Senior Football Championship and All-Ireland Senior Hurling Championship. Recent successes for Northern Ireland teams include Armagh's 2002 All-Ireland Senior Football Championship win and Tyrone's wins in 2003, 2005 and 2008.
Perhaps Northern Ireland's most notable successes in professional sport have come in golf. Northern Ireland has contributed more major champions in the modern era than any other European country, with three in the space of just 14 months from the US Open in 2010 to The Open Championship in 2011. Notable golfers include Fred Daly (winner of The Open in 1947), Ryder Cup players Ronan Rafferty and David Feherty, leading European Tour professionals David Jones, Michael Hoey (a winner on Tour in 2011) and Gareth Maybin, as well as three recent major winners: Graeme McDowell (winner of the US Open in 2010, the first European to do so since 1970), Rory McIlroy (winner of four majors) and Darren Clarke (winner of The Open in 2011). Northern Ireland has also contributed several players to the Great Britain and Ireland Walker Cup team, including Alan Dunbar and Paul Cutler, who played on the victorious 2011 team in Scotland. The Golfing Union of Ireland, the governing body for men's and boys' amateur golf throughout Ireland and the oldest golfing union in the world, was founded in Belfast in 1891. Northern Ireland's golf courses include the Royal Belfast Golf Club (the earliest, formed in 1881), Royal Portrush Golf Club, which is the only course outside Great Britain to have hosted The Open Championship, and Royal County Down Golf Club ("Golf Digest" magazine's top-rated course outside the United States). 
Northern Ireland has produced two world snooker champions: Alex Higgins, who won the title in 1972 and 1982, and Dennis Taylor, who won in 1985. The highest-ranked Northern Ireland professional on the world circuit presently is Mark Allen from Antrim. The sport is governed locally by the Northern Ireland Billiards and Snooker Association, which runs regular ranking tournaments and competitions. Motorcycle racing is a particularly popular sport during the summer months, with the main meetings of the season attracting some of the largest crowds to any outdoor sporting event in the whole of Ireland. Two of the three major international road race meetings are held in Northern Ireland, these being the North West 200 and the Ulster Grand Prix. In addition, racing on purpose-built circuits takes place at Kirkistown and Bishop's Court, whilst smaller road race meetings are held, such as the Cookstown 100, the Armoy Road Races and the Tandragee 100, all of which form part of the Irish National Road Race Championships and which have produced some of the greatest motorcycle racers in the history of the sport, notably Joey Dunlop. Although Northern Ireland lacks an international automobile racecourse, two Northern Irish drivers have finished inside the top two of Formula One, with John Watson achieving the feat in 1982 and Eddie Irvine doing the same in 1999. The largest course and the only MSA-licensed track for UK-wide competition is Kirkistown. The Ireland national rugby league team has participated in the Emerging Nations Tournament (1995), the Super League World Nines (1996), the World Cup (2000 and 2008), European Nations Cup (since 2003) and Victory Cup (2004). The Ireland A rugby league team competes annually in the Amateur Four Nations competition (since 2002) and the St Patrick's Day Challenge (since 1995). In 2007, after the closure of the wrestling promotion UCW (Ulster Championship Wrestling), PWU (Pro Wrestling Ulster) was formed. The promotion features championships, former WWE superstars and local independent wrestlers, and holds events and iPPVs throughout Northern Ireland. Unlike in most areas of the United Kingdom, many children sit entrance examinations for grammar schools in the last year of primary school. Integrated schools, which attempt to ensure a balance in enrolment between pupils of Protestant, Roman Catholic and other faiths (or none), are becoming increasingly popular, although Northern Ireland still has a primarily "de facto" religiously segregated education system. In the primary school sector, 40 schools (8.9% of the total number) are integrated schools and 32 (7.2% of the total number) are Irish language-medium schools. The main universities in Northern Ireland are Queen's University Belfast and Ulster University, and the distance-learning Open University, which has a regional office in Belfast. A total of 356 species of marine algae have been recorded in the north-east of Ireland. As Counties Londonderry, Antrim and Down are the only three counties of Northern Ireland with a shoreline, this figure applies to Northern Ireland as a whole. Of these, 77 species are considered rare, having been recorded only infrequently. The BBC has a division called BBC Northern Ireland with headquarters in Belfast. As well as broadcasting standard UK-wide programmes, BBC NI produces local content, including a news break-out called BBC Newsline. The ITV franchise in Northern Ireland is Ulster Television (UTV). The state-owned Channel 4 and the privately owned Channel 5 also broadcast in Northern Ireland. 
Access is available to satellite and cable services. All Northern Ireland viewers must obtain a UK TV licence to watch live television transmissions. RTÉ, the national broadcaster of the Republic of Ireland, is available over the air to most parts of Northern Ireland via reception overspill and via satellite and cable. Since the digital TV switchover, RTÉ One, RTÉ2 and the Irish-language channel TG4 are now available over the air on the UK's Freeview system from transmitters within Northern Ireland. Although they are transmitted in standard definition, a Freeview HD box or television is required for reception. As well as the standard UK-wide radio stations from the BBC, Northern Ireland is home to many local radio stations, such as Cool FM, CityBeat, and Q102.9. The BBC has two regional radio stations which broadcast in Northern Ireland, BBC Radio Ulster and BBC Radio Foyle. Besides the UK and Irish national newspapers, there are three main regional newspapers published in Northern Ireland. These are the "Belfast Telegraph", the "Irish News" and the "News Letter". Average daily circulation figures for these three titles in 2018 were reported by the Audit Bureau of Circulations (UK). Northern Ireland uses the same telecommunications and postal services as the rest of the United Kingdom at standard domestic rates and there are no mobile roaming charges between Great Britain and Northern Ireland. People in Northern Ireland who live close to the border with the Republic of Ireland may inadvertently switch over to the Irish mobile networks, causing international roaming fees to be applied. Calls from landlines in Northern Ireland to numbers in the Republic of Ireland are charged at the same rate as those to numbers in Great Britain, while landline numbers in Northern Ireland can similarly be called from the Republic of Ireland at domestic rates, using the 048 prefix.
https://en.wikipedia.org/wiki?curid=21265
Nasreddin Nasreddin, also known as Nasreddin Hodja, Mullah Nasreddin Hodja, or Mullah Nasruddin, was a Seljuq satirist, born in Hortu Village in Sivrihisar, Eskişehir Province, in present-day Turkey, and died in the 13th century in Akşehir, near Konya, the capital of the Seljuk Sultanate of Rum, in today's Turkey. He is considered a populist philosopher, Sufi and wise man, remembered for his funny stories and anecdotes. He appears in thousands of stories, sometimes witty, sometimes wise, but often, too, a fool or the butt of a joke. A Nasreddin story usually has a subtle humour and a pedagogic nature. The International Nasreddin Hodja Festival is celebrated between 5 and 10 July in his hometown every year. Claims about his origin are made by many ethnic groups. Many sources give the birthplace of Nasreddin as Hortu Village in Sivrihisar, Eskişehir Province, present-day Turkey, in the 13th century, after which he settled in Akşehir, and later in Konya under Seljuq rule, where he died in 1275/6 or 1285/6 CE. The alleged tomb of Nasreddin is in Akşehir, and the "International Nasreddin Hodja Festival" is held annually in Akşehir between 5 and 10 July. According to Prof. Mikail Bayram, who has conducted extensive research on Nasreddin, his full name was Nasir ud-din Mahmood al-Khoyi, and his title was Ahi Evran (as the leader of the ahi organization). According to him, Nasreddin was born in the city of Khoy in West Azerbaijan Province of Iran, had his education in Khorasan and became the pupil of the famous Quran mufassir Fakhr al-Din al-Razi in Herat. He was sent to Anatolia by the caliph in Baghdad to organize resistance and uprising against the Mongol invasion. He served as a kadı (an Islamic judge and ombudsman) in Kayseri. This explains why his jokes address judicial as well as religious problems. During the turmoil of the Mongol invasion he became a political opponent of the Persian poet Rumi, who for this reason alluded to him in the Masnavi through Juha anecdotes. He became the vizier at the court of Kaykaus II. Having lived in numerous cities across a vast area, having stood steadfastly against the Mongol invasion, and being of witty character, he was embraced by various nations and cultures, from Turkey to Arabia, from Persia to Afghanistan, and from Russia to China, most of which suffered from those invasions. The Arabic version of the character, known as "Juha", is the oldest attested version of the character and the most divergent, being mentioned in Al-Jahiz's book "Sayings on Mules". According to Al-Dhahabi, his full name was "Abu al-Ghusn Dujayn al-Fizari"; he lived under the Umayyads in Kufa, and his mother was said to have been a servant of Anas ibn Malik, making him one of the Tabi'un in Sunni tradition. As generations have gone by, new stories have been added to the Nasreddin corpus, others have been modified, and he and his tales have spread to many regions. The themes in the tales have become part of the folklore of a number of nations and express the national imaginations of a variety of cultures. Although most of them depict Nasreddin in an early small-village setting, the tales deal with concepts that have a certain timelessness. They purvey a pithy folk wisdom that triumphs over all trials and tribulations. The oldest manuscript of Nasreddin dates to 1571. Today, Nasreddin stories are told in a wide variety of regions, especially across the Muslim world, and have been translated into many languages. 
Some regions independently developed a character similar to Nasreddin, and the stories have become part of a larger whole. In many regions, Nasreddin is a major part of the culture, and is quoted or alluded to frequently in daily life. Since there are thousands of different Nasreddin stories, one can be found to fit almost any occasion. Nasreddin often appears as a whimsical character of a large Turkish, Persian, Albanian, Armenian, Azerbaijani, Bengali, Bosnian, Bulgarian, Chinese, Greek, Gujarati, Hindi, Judeo-Spanish, Kurdish, Romanian, Serbian, Russian, and Urdu folk tradition of vignettes, not entirely different from zen koans. 1996–1997 was declared International Nasreddin Year by UNESCO. Many peoples of the Near, Middle East, South Asia and Central Asia claim Nasreddin as their own ("e.g.", Turks, Afghans, Iranians, and Uzbeks). His name is spelt in a wide variety of ways: "Nasrudeen", "Nasrudin", "Nasruddin", "Nasr ud-Din", "Nasredin", "Nasiruddin," "Naseeruddin", "Nasr Eddin", "Nastradhin", "Nasreddine", "Nastratin", "Nusrettin", "Nasrettin", "Nostradin", "Nastradin" (lit.: Victory of the Deen) and "Nazaruddin". It is sometimes preceded or followed by a title or honorific used in the corresponding cultures: "Hoxha", "Khwaje", "Hodja", "Hoja", "Hojja", "Hodscha", "Hodža", "Hoca", "Hocca","Hooka", "Hogea", "Mullah", "Mulla", "Mula", "Molla", "Efendi", "Afandi", "Ependi" ( "’afandī"), "Hajji". In several cultures he is named by the title alone. In Arabic-speaking countries this character is known as "Juha", "Djoha", "Djuha", "Dschuha", "Chotzas", "Goha" ( "juḥā"). Juha was originally a separate folk character found in Arabic literature as early as the 9th century, and was widely popular by the 11th century. Lore of the two characters became amalgamated in the 19th century when collections were translated from Arabic into Turkish and Persian. In Sicily and Southern Italy he is known as "Giufà", derived from the Arabic character Juha. In the Swahili and Indonesian culture, many of his stories are being told under the name of "Abunuwasi" or "Abunawas", though this confuses Nasreddin with an entirely different man – the poet Abu Nuwas, known for homoerotic verse. In China, where stories of him are well known, he is known by the various transliterations from his Uyghur name, 阿凡提 (Āfántí) and 阿方提 (Āfāngtí). The Uyghurs believe that he was from Xinjiang, while the Uzbeks believe he was from Bukhara. Shanghai Animation Film Studio produced a 13-episode Nasreddin related animation called 'The Story of Afanti'/ 阿凡提 in 1979, which became one of the most influential animations in China's history. The musical Nasirdin Apandim features the legend of Nasreddin effendi ("sir, lord"), largely sourced from Uyghur folklore. In Central Asia, he is commonly known as "Afandi". The Central Asian peoples also claim his local origin, as do Uyghurs. The Nasreddin stories are known throughout the Middle East and have touched cultures around the world. Superficially, most of the Nasreddin stories may be told as jokes or humorous anecdotes. They are told and retold endlessly in the teahouses and caravanserais of Asia and can be heard in homes and on the radio. But it is inherent in a Nasreddin story that it may be understood at many levels. There is the joke, followed by a moral and usually the little extra which brings the consciousness of the potential mystic a little further on the way to realization. 
Nasreddin was the main character in a magazine, called simply "Molla Nasraddin", published in Azerbaijan and "read across the Muslim world from Morocco to Iran". The eight-page Azerbaijani satirical periodical was published in Tiflis (from 1906 to 1917), Tabriz (in 1921) and Baku (from 1922 to 1931) in the Azeri and occasionally Russian languages. Founded by Jalil Mammadguluzadeh, it depicted inequality, cultural assimilation, and corruption and ridiculed the backward lifestyles and values of clergy and religious fanatics. The magazine was frequently banned but had a lasting influence on Azerbaijani and Iranian literature. Some Nasreddin tales also appear in collections of Aesop's fables. "The miller, his son and the donkey" is one example. Others are "The Ass with a Burden of Salt" (Perry Index 180) and "The Satyr and the Traveller." In some Bulgarian folk tales that originated during the Ottoman period, Nasreddin appears as an antagonist to a local wise man named "Sly Peter". In Sicily the same tales involve a man named "Giufà". In Sephardic culture, spread throughout the Ottoman Empire, a character that appears in many folk tales is named "Djohá". In Romanian, the existing stories come from an 1853 verse compilation edited by Anton Pann, a philologist and poet renowned for authoring the current Romanian anthem. Nasreddin is mostly known as a character from short tales; whole novels and stories were later written, and production began on a never-completed animated feature film. In Russia, Nasreddin is known mostly because of the Russian work "Возмутитель спокойствия" by Leonid Solovyov (English translations: "The Beggar in the Harem: Impudent Adventures in Old Bukhara", 1956, and "The Tale of Hodja Nasreddin: Disturber of the Peace", 2009). The composer Shostakovich celebrated Nasreddin, among other figures, in the second movement ("Yumor", "Humor") of his Symphony No. 13. The text, by Yevgeny Yevtushenko, portrays humor as a weapon against dictatorship and tyranny. Shostakovich's music shares many of the "foolish yet profound" qualities of Nasreddin's sayings listed above. The Graeco-Armenian mystic G. I. Gurdjieff often referred to "our own dear Mullah Nasr Eddin", also calling him an "incomparable teacher", particularly in his book "Beelzebub's Tales". Sufi philosopher Idries Shah published several collections of Nasruddin stories in English, and emphasized their teaching value. He is known as "Mullah Nasruddin" in South Asian children's books. A TV serial on him was aired in India as "Mulla Nasiruddin" and was widely watched in India and Pakistan. For Uzbek people, Nasreddin is one of their own; he is said to have lived and been born in Bukhara. In gatherings, family meetings, and parties they tell each other stories about him that are called "latifa" or "afandi". There are at least two collections of stories related to Nasriddin Afandi. In 1943, the Soviet film "Nasreddin in Bukhara" was directed by Yakov Protazanov based on Solovyov's book, followed in 1947 by a film called "The Adventures of Nasreddin", directed by Nabi Ganiyev and also set in the Uzbek SSR.
https://en.wikipedia.org/wiki?curid=21269
Neutron The neutron is a subatomic particle, symbol or , with no electric charge and a mass slightly greater than that of a proton. Protons and neutrons constitute the nuclei of atoms. Since protons and neutrons behave similarly within the nucleus, and each has a mass of approximately one atomic mass unit, they are both referred to as nucleons. Their properties and interactions are described by nuclear physics. The chemical properties of an atom are mostly determined by the configuration of electrons that orbit the atom's heavy nucleus. The electron configuration is determined by the charge of the nucleus, set by the number of protons, or atomic number. Neutrons do not affect the electron configuration, but the sum of atomic number and the number of neutrons, or neutron number, is the mass of the nucleus. Atoms of a chemical element that differ only in neutron number are called isotopes. For example, carbon, with atomic number 6, has an abundant isotope carbon-12 with 6 neutrons and a rare isotope carbon-13 with 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine. Other elements occur with many stable isotopes, such as tin with ten stable isotopes. The properties of an atomic nucleus are dependent on both atomic and neutron numbers. With their positive charge, the protons within the nucleus are repelled by the long-range electromagnetic force, but the much stronger, but short-range, nuclear force binds the nucleons closely together. Neutrons are required for the stability of nuclei, with the exception of the single-proton hydrogen nucleus. Neutrons are produced copiously in nuclear fission and fusion. They are a primary contributor to the nucleosynthesis of chemical elements within stars through fission, fusion, and neutron capture processes. The neutron is essential to the production of nuclear power. In the decade after the neutron was discovered by James Chadwick in 1932, neutrons were used to induce many different types of nuclear transmutations. With the discovery of nuclear fission in 1938, it was quickly realized that, if a fission event produced neutrons, each of these neutrons might cause further fission events, in a cascade known as a nuclear chain reaction. These events and findings led to the first self-sustaining nuclear reactor (Chicago Pile-1, 1942) and the first nuclear weapon (Trinity, 1945). Free neutrons, while not directly ionizing atoms, cause ionizing radiation. As such they can be a biological hazard, depending upon dose. A small natural "neutron background" flux of free neutrons exists on Earth, caused by cosmic ray showers, and by the natural radioactivity of spontaneously fissionable elements in the Earth's crust. Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. An atomic nucleus is formed by a number of protons, "Z" (the atomic number), and a number of neutrons, "N" (the neutron number), bound together by the nuclear force. The atomic number determines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Isotopes are nuclides with the same atomic number, but different neutron number. Nuclides with the same neutron number, but different atomic number, are called isotones. 
The atomic mass number, "A", is equal to the sum of atomic and neutron numbers. Nuclides with the same atomic mass number, but different atomic and neutron numbers, are called isobars. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol 1H) is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium (D or 2H) and tritium (T or 3H) contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons. The most common nuclide of the common chemical element lead, 208Pb, has 82 protons and 126 neutrons, for example. The table of nuclides comprises all the known nuclides. Even though it is not a chemical element, the neutron is included in this table. The free neutron has a mass of 939,565,413.3 eV/c2, or about 1.675×10−27 kg, or about 1.00866 u. The neutron has a mean square radius of about 0.8 fm, and it is a spin-½ fermion. The neutron has no measurable electric charge. With its positive electric charge, the proton is directly influenced by electric fields, whereas the neutron is unaffected by electric fields. The neutron has a magnetic moment, however, so the neutron is influenced by magnetic fields. The neutron's magnetic moment has a negative value, because its orientation is opposite to the neutron's spin. A free neutron is unstable, decaying to a proton, electron and antineutrino with a mean lifetime of just under 15 minutes (about 880 seconds). This radioactive decay, known as beta decay, is possible because the mass of the neutron is slightly greater than that of the proton. The free proton is stable. Neutrons or protons bound in a nucleus can be stable or unstable, however, depending on the nuclide. Beta decay, in which neutrons decay to protons, or vice versa, is governed by the weak force, and it requires the emission or absorption of electrons and neutrinos, or their antiparticles. Protons and neutrons behave almost identically under the influence of the nuclear force within the nucleus. The concept of isospin, in which the proton and neutron are viewed as two quantum states of the same particle, is used to model the interactions of nucleons by the nuclear or weak forces. Because of the strength of the nuclear force at short distances, the binding energy of nucleons is more than seven orders of magnitude larger than the electromagnetic energy binding electrons in atoms. Nuclear reactions (such as nuclear fission) therefore have an energy density that is more than ten million times that of chemical reactions. Because of the mass–energy equivalence, nuclear binding energies reduce the mass of nuclei. Ultimately, the ability of the nuclear force to store energy arising from the electromagnetic repulsion of nuclear components is the basis for most of the energy that makes nuclear reactors or bombs possible. In nuclear fission, the absorption of a neutron by a heavy nuclide (e.g., uranium-235) causes the nuclide to become unstable and break into light nuclides and additional neutrons. The positively charged light nuclides then repel, releasing electromagnetic potential energy. The neutron is classified as a "hadron", because it is a composite particle made of quarks. The neutron is also classified as a "baryon", because it is composed of three valence quarks. The finite size of the neutron and its magnetic moment both indicate that the neutron is a composite, rather than elementary, particle. A neutron contains two down quarks with charge −1/3 "e" and one up quark with charge +2/3 "e". 
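As a quick arithmetic check of the bookkeeping above, the following minimal Python sketch (illustrative only; the nuclides and quark charges are those quoted in the text) computes neutron numbers from A and Z and verifies that the valence-quark charges of the neutron sum to zero:

    from fractions import Fraction

    # Neutron number N = A - Z for the nuclides mentioned above.
    for name, Z, A in [("1H", 1, 1), ("2H", 1, 2), ("3H", 1, 3), ("208Pb", 82, 208)]:
        print(name, "has", A - Z, "neutrons")     # 208Pb -> 126 neutrons, as stated

    # Net charge from the valence quarks: two down (-1/3 e) plus one up (+2/3 e).
    d, u = Fraction(-1, 3), Fraction(2, 3)
    assert 2 * d + u == 0      # neutron (udd) is electrically neutral
    assert 2 * u + d == 1      # proton (uud) carries charge +1 e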
Like protons, the quarks of the neutron are held together by the strong force, mediated by gluons. The nuclear force results from secondary effects of the more fundamental strong force. The story of the discovery of the neutron and its properties is central to the extraordinary developments in atomic physics that occurred in the first half of the 20th century, leading ultimately to the atomic bomb in 1945. In the 1911 Rutherford model, the atom consisted of a small positively charged massive nucleus surrounded by a much larger cloud of negatively charged electrons. In 1920, Rutherford suggested that the nucleus consisted of positive protons and neutrally-charged particles, suggested to be a proton and an electron bound in some way. Electrons were assumed to reside within the nucleus because it was known that beta radiation consisted of electrons emitted from the nucleus. Rutherford called these uncharged particles "neutrons", by the Latin root for "neutralis" (neuter) and the Greek suffix "-on" (a suffix used in the names of subatomic particles, i.e. "electron" and "proton"). References to the word "neutron" in connection with the atom can be found in the literature as early as 1899, however. Throughout the 1920s, physicists assumed that the atomic nucleus was composed of protons and "nuclear electrons" but there were obvious problems. It was difficult to reconcile the proton–electron model for nuclei with the Heisenberg uncertainty relation of quantum mechanics. The Klein paradox, discovered by Oskar Klein in 1928, presented further quantum mechanical objections to the notion of an electron confined within a nucleus. Observed properties of atoms and molecules were inconsistent with the nuclear spin expected from the proton–electron hypothesis. Both protons and electrons carry an intrinsic spin of ½ "ħ". Isotopes of the same species (i.e. having the same number of protons) can have both integer or fractional spin, i.e. the neutron spin must be also fractional (½ "ħ"). However, there is no way to arrange the spins of an electron and a proton (supposed to bond to form a neutron) to get the fractional spin of a neutron. In 1931, Walther Bothe and Herbert Becker found that if alpha particle radiation from polonium fell on beryllium, boron, or lithium, an unusually penetrating radiation was produced. The radiation was not influenced by an electric field, so Bothe and Becker assumed it was gamma radiation. The following year Irène Joliot-Curie and Frédéric Joliot-Curie in Paris showed that if this "gamma" radiation fell on paraffin, or any other hydrogen-containing compound, it ejected protons of very high energy. Neither Rutherford nor James Chadwick at the Cavendish Laboratory in Cambridge were convinced by the gamma ray interpretation. Chadwick quickly performed a series of experiments that showed that the new radiation consisted of uncharged particles with about the same mass as the proton. These particles were neutrons. Chadwick won the 1935 Nobel Prize in Physics for this discovery. Models for atomic nucleus consisting of protons and neutrons were quickly developed by Werner Heisenberg and others. The proton–neutron model explained the puzzle of nuclear spins. The origins of beta radiation were explained by Enrico Fermi in 1934 by the process of beta decay, in which the neutron decays to a proton by "creating" an electron and a (as yet undiscovered) neutrino. In 1935, Chadwick and his doctoral student Maurice Goldhaber reported the first accurate measurement of the mass of the neutron. 
By 1934, Fermi had bombarded heavier elements with neutrons to induce radioactivity in elements of high atomic number. In 1938, Fermi received the Nobel Prize in Physics "for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". In 1938 Otto Hahn, Lise Meitner, and Fritz Strassmann discovered nuclear fission, or the fractionation of uranium nuclei into light elements, induced by neutron bombardment. In 1945 Hahn received the 1944 Nobel Prize in Chemistry "for his discovery of the fission of heavy atomic nuclei." The discovery of nuclear fission would lead to the development of nuclear power and the atomic bomb by the end of World War II. Since interacting protons have a mutual electromagnetic repulsion that is stronger than their attractive nuclear interaction, neutrons are a necessary constituent of any atomic nucleus that contains more than one proton (see diproton and neutron–proton ratio). Neutrons bind with protons and one another in the nucleus via the nuclear force, effectively moderating the repulsive forces between the protons and stabilizing the nucleus. The neutrons and protons bound in a nucleus form a quantum mechanical system wherein each nucleon is bound in a particular, hierarchical quantum state. Protons can decay to neutrons, or vice versa, within the nucleus. This process, called beta decay, requires the emission of an electron or positron and an associated neutrino. These emitted particles carry away the energy excess as a nucleon falls from one quantum state to a lower energy state, while the proton (or neutron) changes to a neutron (or proton). Such decay processes can occur only if allowed by basic energy conservation and quantum mechanical constraints. The stability of nuclei depends on these constraints. Outside the nucleus, free neutrons are unstable and have a mean lifetime of about 880 seconds (about 14 minutes, 40 seconds); therefore the half-life for this process (which differs from the mean lifetime by a factor of ln 2 ≈ 0.693) is about 610 seconds (about 10 minutes, 10 seconds). This decay is only possible because the mass of the proton is less than that of the neutron. By the mass-energy equivalence, when a neutron decays to a proton this way it attains a lower energy state. Beta decay of the neutron, described above, can be denoted by the radioactive decay n → p + e− + ν̄e, where p, e−, and ν̄e denote the proton, electron and electron antineutrino, respectively. For the free neutron the decay energy for this process (based on the masses of the neutron, proton, and electron) is 0.782343 MeV. The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0.782 ± 0.013 MeV. The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy); in any case, the neutrino mass is constrained by many other methods. A small fraction (about one in 1000) of free neutrons decay with the same products, but add an extra particle in the form of an emitted gamma ray (n → p + e− + ν̄e + γ). This gamma ray may be thought of as an "internal bremsstrahlung" that arises from the electromagnetic interaction of the emitted beta particle with the proton. Internal bremsstrahlung gamma ray production is also a minor feature of beta decays of bound neutrons (as discussed below). 
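The relationship between the mean lifetime and the half-life quoted above follows directly from the exponential decay law. A minimal Python sketch (assuming a mean lifetime of roughly 880 s, as above) illustrates the conversion and the surviving fraction of a population of free neutrons:

    import math

    tau = 880.0                      # approximate mean lifetime of the free neutron, in seconds
    half_life = tau * math.log(2)    # t_1/2 = tau * ln 2  ->  about 610 s

    def surviving_fraction(t):
        """Fraction of free neutrons that have not yet decayed after t seconds."""
        return math.exp(-t / tau)

    print(round(half_life), "s half-life")    # ~610
    print(surviving_fraction(half_life))      # 0.5 by construction
    print(surviving_fraction(3600))           # only ~1.7% of free neutrons survive a full hour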
A very small minority of neutron decays (about four per million) are so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV of energy necessary to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the "two bodies"). In this type of free neutron decay, almost all of the neutron decay energy is carried off by the antineutrino (the other "body"). (The hydrogen atom recoils with a speed of only about (decay energy)/(hydrogen rest energy) times the speed of light, or 250 km/s.) The transformation of a free proton to a neutron (plus a positron and a neutrino) is energetically impossible, since a free neutron has a greater mass than a free proton. But a high-energy collision of a proton and an electron or neutrino can result in a neutron. While a free neutron has a half-life of about 10.2 min, most neutrons within nuclei are stable. According to the nuclear shell model, the protons and neutrons of a nuclide are a quantum mechanical system organized into discrete energy levels with unique quantum numbers. For a neutron to decay, the resulting proton requires an available state at lower energy than the initial neutron state. In stable nuclei the possible lower energy states are all filled, meaning each is occupied by a pair of protons, one with spin up and one with spin down. The Pauli exclusion principle therefore disallows the decay of a neutron to a proton within stable nuclei. The situation is similar to electrons of an atom, where electrons have distinct atomic orbitals and are prevented from decaying to lower energy states, with the emission of a photon, by the exclusion principle. Neutrons in unstable nuclei can decay by beta decay as described above. In this case, an energetically allowed quantum state is available for the proton resulting from the decay. One example of this decay is carbon-14 (6 protons, 8 neutrons), which decays to nitrogen-14 (7 protons, 7 neutrons) with a half-life of about 5,730 years. Inside a nucleus, a proton can transform into a neutron via inverse beta decay, if an energetically allowed quantum state is available for the neutron. This transformation occurs by emission of a positron and an electron neutrino: p → n + e+ + νe. The transformation of a proton to a neutron inside a nucleus is also possible through electron capture: p + e− → n + νe. Positron capture by neutrons in nuclei that contain an excess of neutrons is also possible, but is hindered because positrons are repelled by the positive nucleus, and quickly annihilate when they encounter electrons. Three types of beta decay in competition are illustrated by the single isotope copper-64 (29 protons, 35 neutrons), which has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide is almost equally likely to undergo proton decay (by positron emission, 18%, or by electron capture, 43%) or neutron decay (by electron emission, 39%). Within the theoretical framework of the Standard Model of particle physics, the neutron is composed of two down quarks and an up quark. The only possible decay mode for the neutron that conserves baryon number is for one of the neutron's quarks to change flavour via the weak interaction. The decay of one of the neutron's down quarks into a lighter up quark can be achieved by the emission of a W boson. 
By this process, the Standard Model description of beta decay, the neutron decays into a proton (which contains one down and two up quarks), an electron, and an electron antineutrino. The conversion of a proton to a neutron occurs similarly through the electroweak force. The decay of one of the proton's up quarks into a down quark can be achieved by the emission of a W boson. The proton decays into a neutron, a positron, and an electron neutrino. This reaction can only occur within an atomic nucleus which has a quantum state at lower energy available for the created neutron. The mass of a neutron cannot be directly determined by mass spectrometry due to its lack of electric charge. However, since the masses of a proton and of a deuteron can be measured with a mass spectrometer, the mass of a neutron can be deduced by subtracting the proton mass from the deuteron mass and adding the binding energy of deuterium (expressed as a positive emitted energy). The latter can be directly measured by measuring the energy of the single gamma photon emitted when neutrons are captured by protons (this is exothermic and happens with zero-energy neutrons), plus the small recoil kinetic energy of the deuteron (about 0.06% of the total energy). The energy of the gamma ray can be measured to high precision by X-ray diffraction techniques, as was first done by Bell and Elliot in 1948. The best modern (1986) values for the neutron mass by this technique were provided by Greene et al. The value for the neutron mass in MeV is less accurately known, owing to the lower accuracy of the known conversion of u to MeV. Another method to determine the mass of a neutron starts from the beta decay of the neutron, in which the momenta of the resulting proton and electron are measured. The total electric charge of the neutron is zero. This zero value has been tested experimentally, and the present experimental limit on any residual charge of the neutron is consistent with zero within the measurement uncertainties. By comparison, the charge of the proton is +1 "e". Even though the neutron is a neutral particle, the magnetic moment of a neutron is not zero. The neutron is not affected by electric fields, but it is affected by magnetic fields. The magnetic moment of the neutron is an indication of its quark substructure and internal charge distribution. The value for the neutron's magnetic moment was first directly measured by Luis Alvarez and Felix Bloch at Berkeley, California, in 1940. Alvarez and Bloch determined the magnetic moment of the neutron to be approximately −1.93 "μ"N, where "μ"N is the nuclear magneton. In the quark model for hadrons, the neutron is composed of one up quark (charge +2/3 "e") and two down quarks (charge −1/3 "e"). The magnetic moment of the neutron can be modeled as a sum of the magnetic moments of the constituent quarks. The calculation assumes that the quarks behave like pointlike Dirac particles, each having their own magnetic moment. Simplistically, the magnetic moment of the neutron can be viewed as resulting from the vector sum of the three quark magnetic moments, plus the orbital magnetic moments caused by the movement of the three charged quarks within the neutron. In one of the early successes of the Standard Model (SU(6) theory, now understood in terms of quark behavior), in 1964 Mirza A.B. Beg, Benjamin W. 
Lee, and Abraham Pais theoretically calculated the ratio of proton to neutron magnetic moments to be −3/2, which agrees with the experimental value to within 3%. The measured value for this ratio is approximately −1.46. A contradiction of the quantum mechanical basis of this calculation with the Pauli exclusion principle led to the discovery of the color charge for quarks by Oscar W. Greenberg in 1964. The above treatment compares neutrons with protons, allowing the complex behavior of quarks to be subtracted out between models, and merely exploring what the effects would be of differing quark charges (or quark type). Such calculations are enough to show that the interior of neutrons is very much like that of protons, save for the difference in quark composition with a down quark in the neutron replacing an up quark in the proton. The neutron magnetic moment can be roughly computed by assuming a simple nonrelativistic, quantum mechanical wavefunction for baryons composed of three quarks. A straightforward calculation gives fairly accurate estimates for the magnetic moments of neutrons, protons, and other baryons. For a neutron, the end result of this calculation is that the magnetic moment of the neutron is given by "μ"n = (4"μ"d − "μ"u)/3, where "μ"d and "μ"u are the magnetic moments for the down and up quarks, respectively. This result combines the intrinsic magnetic moments of the quarks with their orbital magnetic moments, and assumes the three quarks are in a particular, dominant quantum state. The results of this calculation are encouraging, but the masses of the up or down quarks were assumed to be 1/3 the mass of a nucleon. The masses of the quarks are actually only about 1% of that of a nucleon. The discrepancy stems from the complexity of the Standard Model for nucleons, where most of their mass originates in the gluon fields, virtual particles, and their associated energy that are essential aspects of the strong force. Furthermore, the complex system of quarks and gluons that constitute a neutron requires a relativistic treatment. The nucleon magnetic moment has been successfully computed numerically from first principles, however, including all the effects mentioned and using more realistic values for the quark masses. The calculation gave results that were in fair agreement with measurement, but it required significant computing resources. The neutron is a spin 1/2 particle, that is, it is a fermion with intrinsic angular momentum equal to 1/2 "ħ", where "ħ" is the reduced Planck constant. For many years after the discovery of the neutron, its exact spin was ambiguous. Although it was assumed to be a spin 1/2 Dirac particle, the possibility that the neutron was a spin 3/2 particle lingered. The interactions of the neutron's magnetic moment with an external magnetic field were exploited to finally determine the spin of the neutron. In 1949, Hughes and Burgy measured neutrons reflected from a ferromagnetic mirror and found that the angular distribution of the reflections was consistent with spin 1/2. In 1954, Sherwood, Stephenson, and Bernstein employed neutrons in a Stern–Gerlach experiment that used a magnetic field to separate the neutron spin states. They recorded two such spin states, consistent with a spin 1/2 particle. As a fermion, the neutron is subject to the Pauli exclusion principle; two neutrons cannot have the same quantum numbers. This is the source of the degeneracy pressure which makes neutron stars possible. 
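The −3/2 ratio quoted above can be reproduced with the simple quark-model formulas already given. A minimal Python sketch (illustrative only; it assumes equal effective masses for the up and down quarks, so their moments scale with their charges by a common constant k, which is an assumption of the simple model rather than a measured fact):

    from fractions import Fraction

    # Quark magnetic moments in units of an arbitrary common scale k (equal-mass assumption):
    mu_u, mu_d = Fraction(2, 3), Fraction(-1, 3)

    mu_p = (4 * mu_u - mu_d) / 3    # proton:  (4*mu_u - mu_d)/3
    mu_n = (4 * mu_d - mu_u) / 3    # neutron: (4*mu_d - mu_u)/3, the formula quoted in the text

    print(mu_p / mu_n)              # -3/2, the Beg-Lee-Pais prediction
    # The measured ratio is about -1.46, i.e. within a few percent of -3/2.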
An article published in 2007 featuring a model-independent analysis concluded that the neutron has a negatively charged exterior, a positively charged middle, and a negative core. In a simplified classical view, the negative "skin" of the neutron assists it to be attracted to the protons with which it interacts in the nucleus. (However, the main attraction between neutrons and protons is via the nuclear force, which does not involve electric charge.) The simplified classical view of the neutron's charge distribution also "explains" the fact that the neutron magnetic dipole points in the opposite direction from its spin angular momentum vector (as compared to the proton). This gives the neutron, in effect, a magnetic moment which resembles a negatively charged particle. This can be reconciled classically with a neutral neutron composed of a charge distribution in which the negative sub-parts of the neutron have a larger average radius of distribution, and therefore contribute more to the particle's magnetic dipole moment, than do the positive parts that are, on average, nearer the core. The Standard Model of particle physics predicts a tiny separation of positive and negative charge within the neutron leading to a permanent electric dipole moment. The predicted value is, however, well below the current sensitivity of experiments. From several unsolved puzzles in particle physics, it is clear that the Standard Model is not the final and full description of all particles and their interactions. New theories going beyond the Standard Model generally lead to much larger predictions for the electric dipole moment of the neutron. Currently, there are at least four experiments trying to measure a finite neutron electric dipole moment for the first time. The antineutron is the antiparticle of the neutron. It was discovered by Bruce Cork in 1956, a year after the antiproton was discovered. CPT-symmetry puts strong constraints on the relative properties of particles and antiparticles, so studying antineutrons provides stringent tests of CPT-symmetry. The measured fractional difference in the masses of the neutron and antineutron is consistent with zero; since it is only about two standard deviations away from zero, it does not give any convincing evidence of CPT violation. The existence of stable clusters of 4 neutrons, or tetraneutrons, has been hypothesised by a team led by Francisco-Miguel Marqués at the CNRS Laboratory for Nuclear Physics based on observations of the disintegration of beryllium-14 nuclei. This is particularly interesting because current theory suggests that these clusters should not be stable. In February 2016, Japanese physicist Susumu Shimoura of the University of Tokyo and co-workers reported they had observed the purported tetraneutrons for the first time experimentally. Nuclear physicists around the world say this discovery, if confirmed, would be a milestone in the field of nuclear physics and certainly would deepen our understanding of the nuclear forces. The dineutron is another hypothetical particle. In 2012, Artemis Spyrou from Michigan State University and coworkers reported that they observed, for the first time, the dineutron emission in the decay of 16Be. The dineutron character is evidenced by a small emission angle between the two neutrons. The authors measured the two-neutron separation energy to be 1.35(10) MeV, in good agreement with shell model calculations, using standard interactions for this mass region. 
At extremely high pressures and temperatures, nucleons and electrons are believed to collapse into bulk neutronic matter, called neutronium. This is presumed to happen in neutron stars. The extreme pressure inside a neutron star may deform the neutrons into a cubic symmetry, allowing tighter packing of neutrons. The common means of detecting a charged particle by looking for a track of ionization (such as in a cloud chamber) does not work for neutrons directly. Neutrons that elastically scatter off atoms can create an ionization track that is detectable, but the experiments are not as simple to carry out; other means for detecting neutrons, consisting of allowing them to interact with atomic nuclei, are more commonly used. The commonly used methods to detect neutrons can therefore be categorized according to the nuclear processes relied upon, mainly neutron capture or elastic scattering. A common method for detecting neutrons involves converting the energy released from neutron capture reactions into electrical signals. Certain nuclides have a high neutron capture cross section, which is the probability of absorbing a neutron. Upon neutron capture, the compound nucleus emits more easily detectable radiation, for example an alpha particle, which is then detected. Nuclides such as helium-3, lithium-6, boron-10 and uranium-235 are useful for this purpose. Neutrons can elastically scatter off nuclei, causing the struck nucleus to recoil. Kinematically, a neutron can transfer more energy to a light nucleus such as hydrogen or helium than to a heavier nucleus. Detectors relying on elastic scattering are called fast neutron detectors. Recoiling nuclei can ionize and excite further atoms through collisions. Charge and/or scintillation light produced in this way can be collected to produce a detected signal. A major challenge in fast neutron detection is discerning such signals from erroneous signals produced by gamma radiation in the same detector. Methods such as pulse shape discrimination can be used to distinguish neutron signals from gamma-ray signals, although certain inorganic scintillator-based detectors have been developed to selectively detect neutrons in mixed radiation fields inherently without any additional techniques. Fast neutron detectors have the advantage of not requiring a moderator, and are therefore capable of measuring the neutron's energy, time of arrival, and in certain cases direction of incidence. Free neutrons are unstable, although they have the longest half-life of any unstable subatomic particle by several orders of magnitude. Their half-life is still only about 10 minutes, however, so they can be obtained only from sources that produce them continuously. Natural neutron background. A small natural background flux of free neutrons exists everywhere on Earth. In the atmosphere and deep into the ocean, the "neutron background" is caused by muons produced by cosmic ray interaction with the atmosphere. These high-energy muons are capable of penetration to considerable depths in water and soil. There, in striking atomic nuclei, among other reactions they induce spallation reactions in which a neutron is liberated from the nucleus. Within the Earth's crust a second source is neutrons produced primarily by spontaneous fission of uranium and thorium present in crustal minerals. 
The neutron background is not strong enough to be a biological hazard, but it is of importance to very high resolution particle detectors that are looking for very rare events, such as (hypothesized) interactions that might be caused by particles of dark matter. Recent research has shown that even thunderstorms can produce neutrons with energies of up to several tens of MeV. Recent research has shown that the fluence of these neutrons lies between 10−9 and 10−13 per ms and per m2 depending on the detection altitude. The energy of most of these neutrons, even with initial energies of 20 MeV, decreases down to the keV range within 1 ms. Even stronger neutron background radiation is produced at the surface of Mars, where the atmosphere is thick enough to generate neutrons from cosmic ray muon production and neutron-spallation, but not thick enough to provide significant protection from the neutrons produced. These neutrons not only produce a Martian surface neutron radiation hazard from direct downward-going neutron radiation but may also produce a significant hazard from reflection of neutrons from the Martian surface, which will produce reflected neutron radiation penetrating upward into a Martian craft or habitat from the floor. Sources of neutrons for research. These include certain types of radioactive decay (spontaneous fission and neutron emission), and from certain nuclear reactions. Convenient nuclear reactions include tabletop reactions such as natural alpha and gamma bombardment of certain nuclides, often beryllium or deuterium, and induced nuclear fission, such as occurs in nuclear reactors. In addition, high-energy nuclear reactions (such as occur in cosmic radiation showers or accelerator collisions) also produce neutrons from disintegration of target nuclei. Small (tabletop) particle accelerators optimized to produce free neutrons in this way, are called neutron generators. In practice, the most commonly used small laboratory sources of neutrons use radioactive decay to power neutron production. One noted neutron-producing radioisotope, californium-252 decays (half-life 2.65 years) by spontaneous fission 3% of the time with production of 3.7 neutrons per fission, and is used alone as a neutron source from this process. Nuclear reaction sources (that involve two materials) powered by radioisotopes use an alpha decay source plus a beryllium target, or else a source of high-energy gamma radiation from a source that undergoes beta decay followed by gamma decay, which produces photoneutrons on interaction of the high-energy gamma ray with ordinary stable beryllium, or else with the deuterium in heavy water. A popular source of the latter type is radioactive antimony-124 plus beryllium, a system with a half-life of 60.9 days, which can be constructed from natural antimony (which is 42.8% stable antimony-123) by activating it with neutrons in a nuclear reactor, then transported to where the neutron source is needed. Nuclear fission reactors naturally produce free neutrons; their role is to sustain the energy-producing chain reaction. The intense neutron radiation can also be used to produce various radioisotopes through the process of neutron activation, which is a type of neutron capture. Experimental nuclear fusion reactors produce free neutrons as a waste product. However, it is these neutrons that possess most of the energy, and converting that energy to a useful form has proved a difficult engineering challenge. 
Fusion reactors that generate neutrons are likely to create radioactive waste, but the waste is composed of neutron-activated lighter isotopes, which have relatively short (50–100 years) decay periods as compared to typical half-lives of 10,000 years for fission waste, which is long due primarily to the long half-life of alpha-emitting transuranic actinides. Free neutron beams are obtained from neutron sources by neutron transport. For access to intense neutron sources, researchers must go to a specialized neutron facility that operates a research reactor or a spallation source. The neutron's lack of total electric charge makes it difficult to steer or accelerate them. Charged particles can be accelerated, decelerated, or deflected by electric or magnetic fields. These methods have little effect on neutrons. However, some effects may be attained by use of inhomogeneous magnetic fields because of the neutron's magnetic moment. Neutrons can be controlled by methods that include moderation, reflection, and velocity selection. Thermal neutrons can be polarized by transmission through magnetic materials in a method analogous to the Faraday effect for photons. Cold neutrons of wavelengths of 6–7 angstroms can be produced in beams of a high degree of polarization, by use of magnetic mirrors and magnetized interference filters. The neutron plays an important role in many nuclear reactions. For example, neutron capture often results in neutron activation, inducing radioactivity. In particular, knowledge of neutrons and their behavior has been important in the development of nuclear reactors and nuclear weapons. The fissioning of elements like uranium-235 and plutonium-239 is caused by their absorption of neutrons. "Cold", "thermal", and "hot" neutron radiation is commonly employed in neutron scattering facilities, where the radiation is used in a similar way one uses X-rays for the analysis of condensed matter. Neutrons are complementary to the latter in terms of atomic contrasts by different scattering cross sections; sensitivity to magnetism; energy range for inelastic neutron spectroscopy; and deep penetration into matter. The development of "neutron lenses" based on total internal reflection within hollow glass capillary tubes or by reflection from dimpled aluminum plates has driven ongoing research into neutron microscopy and neutron/gamma ray tomography. A major use of neutrons is to excite delayed and prompt gamma rays from elements in materials. This forms the basis of neutron activation analysis (NAA) and prompt gamma neutron activation analysis (PGNAA). NAA is most often used to analyze small samples of materials in a nuclear reactor whilst PGNAA is most often used to analyze subterranean rocks around bore holes and industrial bulk materials on conveyor belts. Another use of neutron emitters is the detection of light nuclei, in particular the hydrogen found in water molecules. When a fast neutron collides with a light nucleus, it loses a large fraction of its energy. By measuring the rate at which slow neutrons return to the probe after reflecting off of hydrogen nuclei, a neutron probe may determine the water content in soil. Because neutron radiation is both penetrating and ionizing, it can be exploited for medical treatments. Neutron radiation can have the unfortunate side-effect of leaving the affected area radioactive, however. Neutron tomography is therefore not a viable medical application. 
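The soil-moisture probe and the moderator materials mentioned above both rely on how much energy a neutron can lose in a single elastic collision. A small Python sketch of the standard non-relativistic collision kinematics (illustrative only; target nuclei are treated as free and at rest, and their masses are approximated by the mass number A) shows why hydrogen-rich materials slow neutrons so effectively:

    # Maximum fraction of a neutron's kinetic energy that can be transferred
    # in one elastic, head-on collision with a free nucleus of mass number A:
    #     f_max = 4*A / (A + 1)**2
    def max_energy_transfer(A):
        return 4 * A / (A + 1) ** 2

    for name, A in [("hydrogen-1", 1), ("deuterium", 2), ("carbon-12", 12), ("lead-208", 208)]:
        print(f"{name}: up to {max_energy_transfer(A):.0%} of the neutron energy per collision")
    # hydrogen-1 can take up to 100% in a single collision; lead-208 only about 2%,
    # which is why light, hydrogen-rich materials make the most effective moderators.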
Fast neutron therapy utilizes high-energy neutrons, typically greater than 20 MeV, to treat cancer. Radiation therapy of cancers is based upon the biological response of cells to ionizing radiation. If radiation is delivered in small sessions to damage cancerous areas, normal tissue will have time to repair itself, while tumor cells often cannot. Neutron radiation can deliver energy to a cancerous region at a rate an order of magnitude larger than gamma radiation. Beams of low-energy neutrons are used in boron capture therapy to treat cancer. In boron capture therapy, the patient is given a drug that contains boron and that preferentially accumulates in the tumor to be targeted. The tumor is then bombarded with very low-energy neutrons (although often higher than thermal energy), which are captured by the boron-10 isotope in the drug, producing an excited state of boron-11 that then decays into lithium-7 and an alpha particle. These decay products have sufficient energy to kill the malignant cell, but insufficient range to damage nearby cells. For such a therapy to be applied to the treatment of cancer, a neutron source having an intensity of the order of a thousand million (10⁹) neutrons per second per cm2 is preferred. Such fluxes require a research nuclear reactor. Exposure to free neutrons can be hazardous, since the interaction of neutrons with molecules in the body can cause disruption to molecules and atoms, and can also cause reactions that give rise to other forms of radiation (such as protons). The normal precautions of radiation protection apply: Avoid exposure, stay as far from the source as possible, and keep exposure time to a minimum. Some particular thought must be given to how to protect from neutron exposure, however. For other types of radiation, e.g., alpha particles, beta particles, or gamma rays, material of a high atomic number and with high density makes for good shielding; frequently, lead is used. However, this approach will not work with neutrons, since the absorption of neutrons does not increase straightforwardly with atomic number, as it does with alpha, beta, and gamma radiation. Instead one needs to look at the particular interactions neutrons have with matter (see the section on detection above). For example, hydrogen-rich materials are often used to shield against neutrons, since ordinary hydrogen both scatters and slows neutrons. This often means that simple concrete blocks or even paraffin-loaded plastic blocks afford better protection from neutrons than do far more dense materials. After slowing, neutrons may then be absorbed with an isotope that has high affinity for slow neutrons without causing secondary capture radiation, such as lithium-6. Hydrogen-rich ordinary water affects neutron absorption in nuclear fission reactors: Usually, neutrons are so strongly absorbed by normal water that fuel enrichment in the fissile isotope is required. The deuterium in heavy water has a very much lower absorption affinity for neutrons than does protium (normal light hydrogen). Deuterium is, therefore, used in CANDU-type reactors, in order to slow (moderate) neutron velocity, to increase the probability of nuclear fission compared to neutron capture. "Thermal neutrons" are free neutrons whose energies have a Maxwell–Boltzmann distribution with kT = 0.0253 eV (about 4.0×10−21 J) at room temperature. This gives a characteristic (not average or median) speed of 2.2 km/s. The name 'thermal' comes from their energy being that of the room temperature gas or material they are permeating. 
(see "kinetic theory" for energies and speeds of molecules). After a number of collisions (often in the range of 10–20) with nuclei, neutrons arrive at this energy level, provided that they are not absorbed. In many substances, thermal neutron reactions show a much larger effective cross-section than reactions involving faster neutrons, and thermal neutrons can therefore be absorbed more readily (i.e., with higher probability) by any atomic nuclei that they collide with, creating a heavier – and often unstable – isotope of the chemical element as a result. Most fission reactors use a neutron moderator to slow down, or "thermalize", the neutrons that are emitted by nuclear fission so that they are more easily captured, causing further fission. Others, called fast breeder reactors, use fission energy neutrons directly. "Cold neutrons" are thermal neutrons that have been equilibrated in a very cold substance such as liquid deuterium. Such a "cold source" is placed in the moderator of a research reactor or spallation source. Cold neutrons are particularly valuable for neutron scattering experiments. Ultracold neutrons are produced by inelastic scattering of cold neutrons in substances with a low neutron absorption cross section at a temperature of a few kelvins, such as solid deuterium or superfluid helium. An alternative production method is the mechanical deceleration of cold neutrons exploiting the Doppler shift. A "fast neutron" is a free neutron with a kinetic energy close to 1 MeV (1.6 × 10⁻¹³ J), hence a speed of ~14,000 km/s (~5% of the speed of light). They are named "fission energy" or "fast" neutrons to distinguish them from lower-energy thermal neutrons, and high-energy neutrons produced in cosmic showers or accelerators. Fast neutrons are produced by nuclear processes such as nuclear fission. Neutrons produced in fission, as noted above, have a Maxwell–Boltzmann distribution of kinetic energies from 0 to ~14 MeV, a mean energy of 2 MeV (for 235U fission neutrons), and a mode of only 0.75 MeV, which means that more than half of them do not qualify as fast (and thus have almost no chance of initiating fission in fertile materials, such as 238U and 232Th). Fast neutrons can be made into thermal neutrons via a process called moderation. This is done with a neutron moderator. In reactors, typically heavy water, light water, or graphite are used to moderate neutrons. D–T (deuterium–tritium) fusion is the fusion reaction that produces the most energetic neutrons, with 14.1 MeV of kinetic energy and traveling at 17% of the speed of light. D–T fusion is also the easiest fusion reaction to ignite, reaching near-peak rates even when the deuterium and tritium nuclei have only a thousandth as much kinetic energy as the 14.1 MeV that will be produced. 14.1 MeV neutrons have about 10 times as much energy as fission neutrons, and are very effective at fissioning even non-fissile heavy nuclei, and these high-energy fissions produce more neutrons on average than fissions by lower-energy neutrons. This makes D–T fusion neutron sources such as proposed tokamak power reactors useful for transmutation of transuranic waste. 14.1 MeV neutrons can also produce neutrons by knocking them loose from nuclei. On the other hand, these very high-energy neutrons are less likely to simply be captured without causing fission or spallation. For these reasons, nuclear weapon design extensively utilizes D–T fusion 14.1 MeV neutrons to cause more fission.
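As a quick consistency check on the speeds quoted for thermal, fast, and D–T fusion neutrons, the sketch below recomputes them from the neutron rest mass; it is illustrative arithmetic only, with the kinetic energies taken from the text above.

```python
# Sketch: back-of-envelope check of the neutron speeds quoted above.
# Classical KE = (1/2) m v^2 is adequate for thermal and ~1 MeV neutrons;
# the 14.1 MeV D-T neutron is treated relativistically.
import math

M_N_MEV = 939.565          # neutron rest energy, MeV
C = 2.998e8                # speed of light, m/s

def speed_classical(ke_mev: float) -> float:
    """v = c * sqrt(2 * KE / (m c^2)), valid when KE << m c^2."""
    return C * math.sqrt(2 * ke_mev / M_N_MEV)

def speed_relativistic(ke_mev: float) -> float:
    """v = c * sqrt(1 - 1/gamma^2) with gamma = 1 + KE / (m c^2)."""
    gamma = 1 + ke_mev / M_N_MEV
    return C * math.sqrt(1 - 1 / gamma**2)

print(f"thermal (0.0253 eV): {speed_classical(0.0253e-6)/1e3:.1f} km/s")   # ~2.2 km/s
print(f"fast    (1 MeV)    : {speed_classical(1.0)/1e3:.0f} km/s")          # ~14,000 km/s
print(f"D-T     (14.1 MeV) : {speed_relativistic(14.1)/C:.0%} of c")        # ~17% of c
```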
Fusion neutrons are able to cause fission in ordinarily non-fissile materials, such as depleted uranium (uranium-238), and these materials have been used in the jackets of thermonuclear weapons. Fusion neutrons also can cause fission in substances that are unsuitable or difficult to make into primary fission bombs, such as reactor grade plutonium. This physical fact thus causes ordinary non-weapons grade materials to become of concern in certain nuclear proliferation discussions and treaties. Other fusion reactions produce much less energetic neutrons. D–D fusion produces a 2.45 MeV neutron and helium-3 half of the time, and produces tritium and a proton but no neutron the rest of the time. D–3He fusion produces no neutron. A fission energy neutron that has slowed down but not yet reached thermal energies is called an epithermal neutron. Cross sections for both capture and fission reactions often have multiple resonance peaks at specific energies in the epithermal energy range. These are of less significance in a fast neutron reactor, where most neutrons are absorbed before slowing down to this range, or in a well-moderated thermal reactor, where epithermal neutrons interact mostly with moderator nuclei, not with either fissile or fertile actinide nuclides. However, in a partially moderated reactor with more interactions of epithermal neutrons with heavy metal nuclei, there are greater possibilities for transient changes in reactivity that might make reactor control more difficult. Ratios of capture reactions to fission reactions are also worse (more captures without fission) in most nuclear fuels such as plutonium-239, making epithermal-spectrum reactors using these fuels less desirable, as captures not only waste the one neutron captured but also usually result in a nuclide that is not fissile with thermal or epithermal neutrons, though still fissionable with fast neutrons. The exception is uranium-233 of the thorium cycle, which has good capture-fission ratios at all neutron energies. High-energy neutrons have much more energy than fission energy neutrons and are generated as secondary particles by particle accelerators or in the atmosphere from cosmic rays. These high-energy neutrons are extremely efficient at ionization and far more likely to cause cell death than X-rays or protons.
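The moderation process described earlier can be quantified with the textbook mean logarithmic energy decrement per collision. The sketch below is an idealized estimate that assumes isotropic elastic scattering off free nuclei at rest; it reproduces the "10–20 collisions" figure quoted above for hydrogen and shows why heavier moderators need many more collisions to bring a fission neutron down through the epithermal range to thermal energy.

```python
# Sketch: average number of elastic collisions needed to slow a 2 MeV fission
# neutron to thermal energy (0.025 eV) in common moderators, using the mean
# logarithmic energy decrement xi. A textbook idealization, not reactor data.
import math

def xi(A: int) -> float:
    """Mean logarithmic energy loss per collision for target mass number A."""
    if A == 1:
        return 1.0
    return 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

def collisions_to_thermalize(A: int, e0_ev: float = 2.0e6, e_th_ev: float = 0.025) -> float:
    return math.log(e0_ev / e_th_ev) / xi(A)

for name, A in [("hydrogen (light water)", 1), ("deuterium (heavy water)", 2), ("carbon (graphite)", 12)]:
    print(f"{name:24s} ~{collisions_to_thermalize(A):.0f} collisions")
```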
https://en.wikipedia.org/wiki?curid=21272
Neon Neon is a chemical element with the symbol Ne and atomic number 10. It is a noble gas. Neon is a colorless, odorless, inert monatomic gas under standard conditions, with about two-thirds the density of air. It was discovered (along with krypton and xenon) in 1898 as one of the three residual rare inert elements remaining in dry air, after nitrogen, oxygen, argon and carbon dioxide were removed. Neon was the second of these three rare gases to be discovered and was immediately recognized as a new element from its bright red emission spectrum. The name neon is derived from the Greek word νέον, the neuter singular form of νέος ("neos"), meaning new. Neon is chemically inert, and no uncharged neon compounds are known. The compounds of neon currently known include ionic molecules, molecules held together by van der Waals forces and clathrates. During cosmic nucleogenesis of the elements, large amounts of neon are built up from the alpha-capture fusion process in stars. Although neon is a very common element in the universe and solar system (it is fifth in cosmic abundance after hydrogen, helium, oxygen and carbon), it is rare on Earth. It composes about 18.2 ppm of air by volume (this is about the same as the molecular or mole fraction) and a smaller fraction in Earth's crust. The reason for neon's relative scarcity on Earth and the inner (terrestrial) planets is that neon is highly volatile and forms no compounds to fix it to solids. As a result, it escaped from the planetesimals under the warmth of the newly ignited Sun in the early Solar System. Even the outer atmosphere of Jupiter is somewhat depleted of neon, although for a different reason. Neon gives a distinct reddish-orange glow when used in low-voltage neon glow lamps, high-voltage discharge tubes and neon advertising signs. The red emission line from neon also causes the well-known red light of helium–neon lasers. Neon is used in some plasma tube and refrigerant applications but has few other commercial uses. It is commercially extracted by the fractional distillation of liquid air. Since air is the only source, it is considerably more expensive than helium. Neon was discovered in 1898 by the British chemists Sir William Ramsay (1852–1916) and Morris W. Travers (1872–1961) in London. Neon was discovered when Ramsay chilled a sample of air until it became a liquid, then warmed the liquid and captured the gases as they boiled off. The gases nitrogen, oxygen, and argon had been identified, but the remaining gases were isolated in roughly their order of abundance, in a six-week period beginning at the end of May 1898. First to be identified was krypton. The next, after krypton had been removed, was a gas which gave a brilliant red light under spectroscopic discharge. This gas, identified in June, was named "neon", the Greek analogue of the Latin "novum" ('new'), suggested by Ramsay's son. The characteristic brilliant red-orange color emitted by gaseous neon when excited electrically was noted immediately. Travers later wrote: "the blaze of crimson light from the tube told its own story and was a sight to dwell upon and never forget." A second gas was also reported along with neon, having approximately the same density as argon but with a different spectrum – Ramsay and Travers named it "metargon". However, subsequent spectroscopic analysis revealed it to be argon contaminated with carbon monoxide. Finally, the same team discovered xenon by the same process, in September 1898.
Neon's scarcity precluded its prompt application for lighting along the lines of Moore tubes, which used nitrogen and which were commercialized in the early 1900s. After 1902, Georges Claude's company Air Liquide produced industrial quantities of neon as a byproduct of his air-liquefaction business. In December 1910 Claude demonstrated modern neon lighting based on a sealed tube of neon. Claude tried briefly to sell neon tubes for indoor domestic lighting, due to their intensity, but the market failed because homeowners objected to the color. In 1912, Claude's associate began selling neon discharge tubes as eye-catching advertising signs and was instantly more successful. Neon tubes were introduced to the U.S. in 1923 with two large neon signs bought by a Los Angeles Packard car dealership. The glow and arresting red color made neon advertising completely different from the competition. The intense color and vibrancy of neon equated with American society at the time, suggesting a "century of progress" and transforming cities into sensational new environments filled with radiating advertisements and "electro-graphic architecture". Neon played a role in the basic understanding of the nature of atoms in 1913, when J. J. Thomson, as part of his exploration into the composition of canal rays, channeled streams of neon ions through a magnetic and an electric field and measured the deflection of the streams with a photographic plate. Thomson observed two separate patches of light on the photographic plate (see image), which suggested two different parabolas of deflection. Thomson eventually concluded that some of the atoms in the neon gas were of higher mass than the rest. Though not understood at the time by Thomson, this was the first discovery of isotopes of stable atoms. Thomson's device was a crude version of the instrument we now term a mass spectrometer. Neon is the second lightest inert gas. Neon has three stable isotopes: 20Ne (90.48%), 21Ne (0.27%) and 22Ne (9.25%). 21Ne and 22Ne are partly primordial and partly nucleogenic (i.e. made by nuclear reactions of other nuclides with neutrons or other particles in the environment) and their variations in natural abundance are well understood. In contrast, 20Ne (the chief primordial isotope made in stellar nucleosynthesis) is not known to be nucleogenic or radiogenic. The causes of the variation of 20Ne in the Earth have thus been hotly debated. The principal nuclear reactions generating nucleogenic neon isotopes start from 24Mg and 25Mg, which produce 21Ne and 22Ne respectively, after neutron capture and immediate emission of an alpha particle. The neutrons that produce the reactions are mostly produced by secondary spallation reactions from alpha particles, in turn derived from uranium-series decay chains. The net result yields a trend towards lower 20Ne/22Ne and higher 21Ne/22Ne ratios observed in uranium-rich rocks such as granites. 21Ne may also be produced in a nucleogenic reaction, when 20Ne absorbs a neutron from various natural terrestrial neutron sources. In addition, isotopic analysis of exposed terrestrial rocks has demonstrated the cosmogenic (cosmic ray) production of 21Ne. This isotope is generated by spallation reactions on magnesium, sodium, silicon, and aluminium. By analyzing all three isotopes, the cosmogenic component can be resolved from magmatic neon and nucleogenic neon. This suggests that neon will be a useful tool in determining cosmic exposure ages of surface rocks and meteorites. 
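Thomson's parallel-field arrangement, described above, separates ions by mass-to-charge ratio because, for small deflections, the electric deflection scales as 1/v² while the magnetic deflection scales as 1/v; eliminating the unknown speed leaves each m/q on its own parabola. The sketch below is a purely illustrative numerical version of that argument; the field strengths and geometry are invented round numbers, not Thomson's actual apparatus.

```python
# Sketch: why parallel E and B fields put each mass-to-charge ratio on its own
# parabola. For small deflections over a field region of length L:
#   y ~ q E L^2 / (2 m v^2)   (along the fields)
#   z ~ q B L^2 / (2 m v)     (perpendicular to them)
# so y / z^2 = 2 E m / (q B^2 L^2), independent of speed and proportional to m/q.
import numpy as np

Q = 1.602e-19                  # charge of a singly ionized atom, C
U = 1.66054e-27                # atomic mass unit, kg
E, B, L = 2.0e4, 0.10, 0.05    # assumed illustrative field strengths (V/m, T) and length (m)

def deflections(mass_u, speeds):
    m = mass_u * U
    y = Q * E * L**2 / (2 * m * speeds**2)
    z = Q * B * L**2 / (2 * m * speeds)
    return z, y

speeds = np.linspace(1.0e5, 5.0e5, 5)      # a spread of ion speeds, m/s
for mass in (20, 22):                      # the two main neon isotopes
    z, y = deflections(mass, speeds)
    print(f"{mass}Ne+  y/z^2 (constant along the trace): {np.unique(np.round(y / z**2, 3))}")
```

Because the constant y/z² differs for 20Ne+ and 22Ne+, the two isotopes expose two distinct parabolic traces on the plate, just as Thomson observed.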
Similar to xenon, neon content observed in samples of volcanic gases is enriched in 20Ne and nucleogenic 21Ne relative to 22Ne content. The neon isotopic content of these mantle-derived samples represents a non-atmospheric source of neon. The 20Ne-enriched components are attributed to exotic primordial rare-gas components in the Earth, possibly representing solar neon. Elevated 20Ne abundances are found in diamonds, further suggesting a solar-neon reservoir in the Earth. Neon is the second-lightest noble gas, after helium. It glows reddish-orange in a vacuum discharge tube. Also, neon has the narrowest liquid range of any element: from 24.55 K to 27.05 K (−248.45 °C to −245.95 °C, or −415.21 °F to −410.71 °F). It has over 40 times the refrigerating capacity (per unit volume) of liquid helium and three times that of liquid hydrogen. In most applications it is a less expensive refrigerant than helium. Neon plasma has the most intense light discharge at normal voltages and currents of all the noble gases. The average color of this light to the human eye is red-orange due to many lines in this range; it also contains a strong green line, which is hidden, unless the visual components are dispersed by a spectroscope. Two quite different kinds of neon lighting are in common use. Neon glow lamps are generally tiny, with most operating between 100 and 250 volts. They have been widely used as power-on indicators and in circuit-testing equipment, but light-emitting diodes (LEDs) now dominate in those applications. These simple neon devices were the forerunners of plasma displays and plasma television screens. Neon signs typically operate at much higher voltages (2–15 kilovolts), and the luminous tubes are commonly meters long. The glass tubing is often formed into shapes and letters for signage, as well as architectural and artistic applications. Stable isotopes of neon are produced in stars. Neon's most abundant isotope 20Ne (90.48%) is created by the nuclear fusion of carbon and carbon in the carbon-burning process of stellar nucleosynthesis. This requires temperatures above 500 megakelvins, which occur in the cores of stars of more than 8 solar masses. Neon is abundant on a universal scale; it is the fifth most abundant chemical element in the universe by mass, after hydrogen, helium, oxygen, and carbon (see chemical element). Its relative rarity on Earth, like that of helium, is due to its relative lightness, high vapor pressure at very low temperatures, and chemical inertness, all properties which tend to keep it from being trapped in the condensing gas and dust clouds that formed the smaller and warmer solid planets like Earth. Neon is monatomic, making it lighter than the molecules of diatomic nitrogen and oxygen which form the bulk of Earth's atmosphere; a balloon filled with neon will rise in air, albeit more slowly than a helium balloon. Neon's abundance in the universe is about 1 part in 750; in the Sun and presumably in the proto-solar system nebula, about 1 part in 600. The Galileo spacecraft atmospheric entry probe found that even in the upper atmosphere of Jupiter, the abundance of neon is reduced (depleted) by about a factor of 10, to a level of 1 part in 6,000 by mass. This may indicate that even the ice-planetesimals which brought neon into Jupiter from the outer solar system, formed in a region which was too warm to retain the neon atmospheric component (abundances of heavier inert gases on Jupiter are several times that found in the Sun). 
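The remark above that a neon balloon rises more slowly than a helium balloon follows from simple ideal-gas buoyancy arithmetic, sketched below; sea-level dry air at 0 °C and 1 atm is assumed.

```python
# Sketch: buoyant lift per litre of fill gas, taken as (density of air minus
# density of the gas), with densities from the ideal-gas law at 0 degC, 1 atm.
R, T, P = 8.314, 273.15, 101325            # J/(mol K), K, Pa

def density_g_per_L(molar_mass_g_mol: float) -> float:
    return P * molar_mass_g_mol / (R * T) / 1000.0   # g/L

rho_air = density_g_per_L(28.96)
for gas, M in [("helium", 4.003), ("neon", 20.18)]:
    rho = density_g_per_L(M)
    print(f"{gas:6s}: density {rho:.3f} g/L, net lift {rho_air - rho:.2f} g per litre")
```

Neon provides roughly a third of the lift of helium per litre, so a neon balloon still rises, only more sluggishly.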
Neon comprises 1 part in 55,000 in the Earth's atmosphere, or 18.2 ppm by volume (this is about the same as the molecule or mole fraction), or 1 part in 79,000 of air by mass. It comprises a smaller fraction in the crust. It is industrially produced by cryogenic fractional distillation of liquefied air. On 17 August 2015, based on studies with the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft, NASA scientists reported the detection of neon in the exosphere of the moon. Neon is the first p-block noble gas, and the first element with a true octet of electrons. It is inert: as is the case with its lighter analogue, helium, no strongly bound neutral molecules containing neon have been identified. The ions [NeAr]+, [NeH]+, and [HeNe]+ have been observed from optical and mass spectrometric studies. Solid neon clathrate hydrate was produced from water ice and neon gas at pressures 0.35–0.48 GPa and temperatures about −30 °C. Ne atoms are not bonded to water and can freely move through this material. They can be extracted by placing the clathrate into a vacuum chamber for several days, yielding ice XVI, the least dense crystalline form of water. The familiar Pauling electronegativity scale relies upon chemical bond energies, but such values have obviously not been measured for inert helium and neon. The Allen electronegativity scale, which relies only upon (measurable) atomic energies, identifies neon as the most electronegative element, closely followed by fluorine and helium. Neon is often used in signs and produces an unmistakable bright reddish-orange light. Although tube lights with other colors are often called "neon", they use different noble gases or varied colors of fluorescent lighting. Neon is used in vacuum tubes, high-voltage indicators, lightning arresters, wavemeter tubes, television tubes, and helium–neon lasers. Liquefied neon is commercially used as a cryogenic refrigerant in applications not requiring the lower temperature range attainable with more extreme liquid-helium refrigeration. Neon, as liquid or gas, is relatively expensive – for small quantities, the price of liquid neon can be more than 55 times that of liquid helium. Driving neon's expense is the rarity of neon, which, unlike helium, can only be obtained from air. The triple point temperature of neon (24.5561 K) is a defining fixed point in the International Temperature Scale of 1990.
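The two atmospheric abundance figures quoted above, 18.2 ppm by volume and roughly 1 part in 79,000 by mass, are mutually consistent, as the short conversion below shows; it assumes dry air with a mean molar mass of about 28.96 g/mol.

```python
# Sketch: converting neon's mole (volume) fraction in air to a mass fraction.
X_NE = 18.2e-6                 # mole (volume) fraction of neon in dry air
M_NE, M_AIR = 20.18, 28.96     # molar masses, g/mol

w_ne = X_NE * M_NE / M_AIR     # mass fraction
print(f"mass fraction: {w_ne:.2e}  (about 1 part in {1/w_ne:,.0f})")
```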
https://en.wikipedia.org/wiki?curid=21273
Nickel Nickel is a chemical element with the symbol Ni and atomic number 28. It is a silvery-white lustrous metal with a slight golden tinge. Nickel belongs to the transition metals and is hard and ductile. Pure nickel, powdered to maximize the reactive surface area, shows significant chemical activity, but larger pieces are slow to react with air under standard conditions because an oxide layer forms on the surface and prevents further corrosion (passivation). Even so, pure native nickel is found in Earth's crust only in tiny amounts, usually in ultramafic rocks, and in the interiors of larger nickel–iron meteorites that were not exposed to oxygen when outside Earth's atmosphere. Meteoric nickel is found in combination with iron, a reflection of the origin of those elements as major end products of supernova nucleosynthesis. An iron–nickel mixture is thought to compose Earth's outer and inner cores. Use of nickel (as a natural meteoric nickel–iron alloy) has been traced as far back as 3500 BCE. Nickel was first isolated and classified as a chemical element in 1751 by Axel Fredrik Cronstedt, who initially mistook the ore for a copper mineral, in the cobalt mines of Los, Hälsingland, Sweden. The element's name comes from a mischievous sprite of German miner mythology, Nickel (similar to Old Nick), who personified the fact that copper-nickel ores resisted refinement into copper. An economically important source of nickel is the iron ore limonite, which often contains 1–2% nickel. Nickel's other important ore minerals include pentlandite and a mixture of Ni-rich natural silicates known as garnierite. Major production sites include the Sudbury region in Canada (which is thought to be of meteoric origin), New Caledonia in the Pacific, and Norilsk in Russia. Nickel is slowly oxidized by air at room temperature and is considered corrosion-resistant. Historically, it has been used for plating iron and brass, coating chemistry equipment, and manufacturing certain alloys that retain a high silvery polish, such as German silver. About 9% of world nickel production is still used for corrosion-resistant nickel plating. Nickel-plated objects sometimes provoke nickel allergy. Nickel has been widely used in coins, though its rising price has led to some replacement with cheaper metals in recent years. Nickel is one of four elements (the others are iron, cobalt, and gadolinium) that are ferromagnetic at approximately room temperature. Alnico permanent magnets based partly on nickel are of intermediate strength between iron-based permanent magnets and rare-earth magnets. The metal is valuable in modern times chiefly in alloys; about 68% of world production is used in stainless steel. A further 10% is used for nickel-based and copper-based alloys, 7% for alloy steels, 3% in foundries, 9% in plating and 4% in other applications, including the fast-growing battery sector. As a compound, nickel has a number of niche chemical manufacturing uses, such as a catalyst for hydrogenation, cathodes for batteries, pigments and metal surface treatments. Nickel is an essential nutrient for some microorganisms and plants that have enzymes with nickel as an active site. Nickel is a silvery-white metal with a slight golden tinge that takes a high polish. It is one of only four elements that are magnetic at or near room temperature, the others being iron, cobalt and gadolinium. Its Curie temperature is 355 °C (671 °F), meaning that bulk nickel is non-magnetic above this temperature.
The unit cell of nickel is a face-centered cube with a lattice parameter of 0.352 nm, giving an atomic radius of 0.124 nm. This crystal structure is stable to pressures of at least 70 GPa. Nickel belongs to the transition metals. It is hard, malleable and ductile, and has a relatively high electrical and thermal conductivity for a transition metal. The high compressive strength of 34 GPa, predicted for ideal crystals, is never obtained in the real bulk material due to the formation and movement of dislocations; however, it has been reached in Ni nanoparticles. The nickel atom has two electron configurations, [Ar] 3d8 4s2 and [Ar] 3d9 4s1, which are very close in energy – the symbol [Ar] refers to the argon-like core structure. There is some disagreement on which configuration has the lowest energy. Chemistry textbooks quote the electron configuration of nickel as [Ar] 4s2 3d8, which can also be written [Ar] 3d8 4s2. This configuration agrees with the Madelung energy ordering rule, which predicts that 4s is filled before 3d. It is supported by the experimental fact that the lowest energy state of the nickel atom is a 3d8 4s2 energy level, specifically the 3d8(3F) 4s2 3F, "J" = 4 level. However, each of these two configurations splits into several energy levels due to fine structure, and the two sets of energy levels overlap. The average energy of states with configuration [Ar] 3d9 4s1 is actually lower than the average energy of states with configuration [Ar] 3d8 4s2. For this reason, the research literature on atomic calculations quotes the ground state configuration of nickel as [Ar] 3d9 4s1. The isotopes of nickel range in atomic weight from 48 u (48Ni) to 78 u (78Ni). Naturally occurring nickel is composed of five stable isotopes: 58Ni, 60Ni, 61Ni, 62Ni and 64Ni, with 58Ni being the most abundant (68.077% natural abundance). Isotopes heavier than 62Ni cannot be formed by nuclear fusion without losing energy. Nickel-62 has the highest mean nuclear binding energy per nucleon of any nuclide, at 8.7946 MeV/nucleon. Its binding energy is greater than both 56Fe and 58Fe, more abundant nuclides often incorrectly cited as the most tightly bound. Although this would seem to predict nickel-62 as the most abundant heavy element in the universe, the relatively high rate of photodisintegration of nickel in stellar interiors causes iron to be by far the most abundant. The stable isotope nickel-60 is the daughter product of the extinct radionuclide 60Fe, which decays with a half-life of 2.6 million years. Because 60Fe has such a long half-life, its persistence in materials in the solar system may generate observable variations in the isotopic composition of 60Ni. Therefore, the abundance of 60Ni present in extraterrestrial material may provide insight into the origin of the solar system and its early history. At least 26 nickel radioisotopes have been characterised, the most stable being 59Ni with a half-life of 76,000 years, 63Ni with 100 years, and 56Ni with 6 days. All of the remaining radioactive isotopes have half-lives that are less than 60 hours and the majority of these have half-lives that are less than 30 seconds. This element also has one meta state. Radioactive nickel-56 is produced by the silicon burning process and later set free in large quantities during type Ia supernovae. The shape of the light curve of these supernovae at intermediate to late times corresponds to the decay via electron capture of nickel-56 to cobalt-56 and ultimately to iron-56.
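As a quick check on the crystallographic figures quoted at the start of the preceding paragraph, the atomic radius follows directly from the face-centred cubic lattice parameter, since in an fcc cell atoms touch along the face diagonal; the arithmetic is sketched below.

```python
# Sketch: fcc geometry relates the lattice parameter a and atomic radius r
# via 4r = a * sqrt(2) (atoms touch along the face diagonal).
import math

a = 0.352                       # lattice parameter of nickel, nm
r = a * math.sqrt(2) / 4
print(f"atomic radius: {r:.3f} nm")   # ~0.124 nm, matching the value quoted above
```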
Nickel-59 is a long-lived cosmogenic radionuclide with a half-life of 76,000 years. It has found many applications in isotope geology, and has been used to date the terrestrial age of meteorites and to determine abundances of extraterrestrial dust in ice and sediment. Nickel-78's half-life was recently measured at 110 milliseconds, and it is believed to be an important isotope in supernova nucleosynthesis of elements heavier than iron. The nuclide 48Ni, discovered in 1999, is the most proton-rich heavy element isotope known. With 28 protons and 20 neutrons, 48Ni is "doubly magic", as is 78Ni with 28 protons and 50 neutrons. Both are therefore unusually stable for nuclides with so large a proton–neutron imbalance. On Earth, nickel occurs most often in combination with sulfur and iron in pentlandite, with sulfur in millerite, with arsenic in the mineral nickeline, and with arsenic and sulfur in nickel galena. Nickel is commonly found in iron meteorites as the alloys kamacite and taenite. The presence of nickel in meteorites was first detected in 1799 by Joseph-Louis Proust, a French chemist who then worked in Spain. Proust analyzed samples of the meteorite from Campo del Cielo (Argentina), which had been obtained in 1783 by Miguel Rubín de Celis, discovering the presence in them of nickel (about 10%) along with iron. The bulk of the nickel is mined from two types of ore deposits. The first is laterite, where the principal ore mineral mixtures are nickeliferous limonite, (Fe,Ni)O(OH), and garnierite (a mixture of various hydrous nickel and nickel-rich silicates). The second is magmatic sulfide deposits, where the principal ore mineral is pentlandite, (Fe,Ni)9S8. Australia and New Caledonia have the biggest estimated reserves, at 45% of the world's total. Identified land-based resources throughout the world averaging 1% nickel or greater comprise at least 130 million tons of nickel (about double the known reserves). About 60% is in laterites and 40% in sulfide deposits. On geophysical evidence, most of the nickel on Earth is believed to be in the Earth's outer and inner cores. Kamacite and taenite are naturally occurring alloys of iron and nickel. For kamacite, the alloy is usually in the proportion of 90:10 to 95:5, although impurities (such as cobalt or carbon) may be present, while for taenite the nickel content is between 20% and 65%. Kamacite and taenite are also found in nickel iron meteorites. The most common oxidation state of nickel is +2, but compounds of Ni0, Ni+, and Ni3+ are well known, and the exotic oxidation states Ni2−, Ni1−, and Ni4+ have been produced and studied. Nickel tetracarbonyl (Ni(CO)4), discovered by Ludwig Mond, is a volatile, highly toxic liquid at room temperature. On heating, the complex decomposes back to nickel and carbon monoxide (Ni(CO)4 → Ni + 4 CO); this behavior is exploited in the Mond process for purifying nickel, as described above. The related nickel(0) complex bis(cyclooctadiene)nickel(0) is a useful catalyst in organonickel chemistry because the cyclooctadiene (or "cod") ligands are easily displaced. Nickel(I) complexes are uncommon, but one example is the tetrahedral complex NiBr(PPh3)3. Many nickel(I) complexes feature Ni–Ni bonding, such as the dark red diamagnetic K4[Ni2(CN)6], prepared by reduction of K2[Ni(CN)4] with sodium amalgam. This compound is oxidised in water, liberating H2. It is thought that the nickel(I) oxidation state is important to nickel-containing enzymes, such as [NiFe]-hydrogenase, which catalyzes the reversible reduction of protons to H2. Nickel(II) forms compounds with all common anions, including sulfide, sulfate, carbonate, hydroxide, carboxylates, and halides.
Nickel(II) sulfate is produced in large quantities by dissolving nickel metal or oxides in sulfuric acid, forming both hexa- and heptahydrates, which are useful for electroplating nickel. Common salts of nickel, such as chloride, nitrate, and sulfate, dissolve in water to give green solutions of the metal aquo complex [Ni(H2O)6]2+. The four halides form nickel compounds, which are solids with molecules that feature octahedral Ni centres. Nickel(II) chloride is most common, and its behavior is illustrative of the other halides. Nickel(II) chloride is produced by dissolving nickel or its oxide in hydrochloric acid. It is usually encountered as the green hexahydrate, the formula of which is usually written NiCl2•6H2O. When dissolved in water, this salt forms the metal aquo complex [Ni(H2O)6]2+. Dehydration of NiCl2•6H2O gives the yellow anhydrous NiCl2. Some tetracoordinate nickel(II) complexes, e.g. bis(triphenylphosphine)nickel chloride, exist both in tetrahedral and square planar geometries. The tetrahedral complexes are paramagnetic, whereas the square planar complexes are diamagnetic. In having properties of magnetic equilibrium and formation of octahedral complexes, they contrast with the divalent complexes of the heavier group 10 metals, palladium(II) and platinum(II), which form only square-planar geometry. Nickelocene is known; it has an electron count of 20, making it relatively unstable. Numerous Ni(III) compounds are known, with the first such examples being nickel(III) trihalophosphines (NiIII(PPh3)X3). Further, Ni(III) forms simple salts with fluoride or oxide ions. Ni(III) can be stabilized by σ-donor ligands such as thiols and phosphines. Ni(IV) is present in the mixed oxide BaNiO3, while Ni(III) is present in nickel oxide hydroxide, which is used as the cathode in many rechargeable batteries, including nickel-cadmium, nickel-iron, nickel hydrogen, and nickel-metal hydride, and used by certain manufacturers in Li-ion batteries. Ni(IV) remains a rare oxidation state of nickel and very few compounds are known to date. Because the ores of nickel are easily mistaken for ores of silver, understanding of this metal and its use dates to relatively recent times. However, the unintentional use of nickel is ancient, and can be traced back as far as 3500 BCE. Bronzes from what is now Syria have been found to contain as much as 2% nickel. Some ancient Chinese manuscripts suggest that "white copper" (cupronickel, known as "baitong") was used there between 1700 and 1400 BCE. This Paktong white copper was exported to Britain as early as the 17th century, but the nickel content of this alloy was not discovered until 1822. Coins of nickel-copper alloy were minted by the Bactrian kings Agathocles, Euthydemus II and Pantaleon in the 2nd century BCE, possibly out of the Chinese cupronickel. In medieval Germany, a red mineral was found in the Erzgebirge (Ore Mountains) that resembled copper ore. However, when miners were unable to extract any copper from it, they blamed a mischievous sprite of German mythology, Nickel (similar to "Old Nick"), for besetting the copper. They called this ore "Kupfernickel" from the German "Kupfer" for copper. This ore is now known to be nickeline, a nickel arsenide. In 1751, Baron Axel Fredrik Cronstedt tried to extract copper from kupfernickel at a cobalt mine in the Swedish village of Los, and instead produced a white metal that he named after the spirit that had given its name to the mineral, nickel. In modern German, Kupfernickel or Kupfer-Nickel designates the alloy cupronickel.
Originally, the only source for nickel was the rare Kupfernickel. Beginning in 1824, nickel was obtained as a byproduct of cobalt blue production. The first large-scale smelting of nickel began in Norway in 1848 from nickel-rich pyrrhotite. The introduction of nickel in steel production in 1889 increased the demand for nickel, and the nickel deposits of New Caledonia, discovered in 1865, provided most of the world's supply between 1875 and 1915. The discovery of the large deposits in the Sudbury Basin, Canada in 1883, in Norilsk-Talnakh, Russia in 1920, and in the Merensky Reef, South Africa in 1924, made large-scale production of nickel possible. Aside from the aforementioned Bactrian coins, nickel was not a component of coins until the mid-19th century. 99.9% nickel five-cent coins were struck in Canada (the world's largest nickel producer at the time) during non-war years from 1922 to 1981; the metal content made these coins magnetic. During the wartime period 1942–45, most or all nickel was removed from Canadian and US coins to save it for manufacturing armor. Canada used 99.9% nickel from 1968 in its higher-value coins until 2000. Coins of nearly pure nickel were first used in 1881 in Switzerland. Birmingham forged nickel coins in about 1833 for trading in Malaya. In the United States, the term "nickel" or "nick" originally applied to the copper-nickel Flying Eagle cent, which replaced copper with 12% nickel in 1857–58, then the Indian Head cent of the same alloy from 1859 to 1864. Still later, in 1865, the term designated the three-cent nickel, with nickel increased to 25%. In 1866, the five-cent shield nickel (25% nickel, 75% copper) appropriated the designation. Along with the alloy proportion, this term has been used to the present in the United States. In the 21st century, the high price of nickel has led to some replacement of the metal in coins around the world. Coins still made with nickel alloys include one- and two-euro coins, 5¢, 10¢, 25¢ and 50¢ U.S. coins, and 20p, 50p, £1 and £2 UK coins. The nickel alloy in UK 5p and 10p coins was replaced with nickel-plated steel beginning in 2012, causing allergy problems for some people and public controversy. More than 2.3 million tonnes (t) of nickel per year are mined worldwide, with Indonesia (560,000 t), the Philippines (340,000 t), Russia (210,000 t), New Caledonia (210,000 t), Australia (170,000 t) and Canada (160,000 t) being the largest producers as of 2019. The largest deposits of nickel in non-Russian Europe are located in Finland and Greece. Identified land-based resources averaging 1% nickel or greater contain at least 130 million tonnes of nickel. Approximately 60% is in laterites and 40% is in sulfide deposits. In addition, extensive deep-sea resources of nickel are in manganese crusts and nodules covering large areas of the ocean floor, particularly in the Pacific Ocean. The one locality in the United States where nickel has been profitably mined is Riddle, Oregon, where several square miles of nickel-bearing garnierite surface deposits are located. The mine closed in 1987. The Eagle mine project is a new nickel mine in Michigan's upper peninsula. Construction was completed in 2013, and operations began in the third quarter of 2014. In the first full year of operation, Eagle Mine produced 18,000 t. Nickel is obtained through extractive metallurgy: it is extracted from the ore by conventional roasting and reduction processes that yield a metal of greater than 75% purity.
In many stainless steel applications, 75% pure nickel can be used without further purification, depending on the impurities. Traditionally, most sulfide ores have been processed using pyrometallurgical techniques to produce a matte for further refining. Recent advances in hydrometallurgical techniques have resulted in significantly purer metallic nickel products. Most sulfide deposits have traditionally been processed by concentration through a froth flotation process followed by pyrometallurgical extraction. In hydrometallurgical processes, nickel sulfide ores are concentrated with flotation (differential flotation if the Ni/Fe ratio is too low) and then smelted. The nickel matte is further processed with the Sherritt-Gordon process. First, copper is removed by adding hydrogen sulfide, leaving a concentrate of cobalt and nickel. Then, solvent extraction is used to separate the cobalt and nickel, with the final nickel content greater than 99%. A second common refining process is leaching the metal matte into a nickel salt solution, followed by the electro-winning of the nickel from solution by plating it onto a cathode as electrolytic nickel. The purest metal is obtained from nickel oxide by the Mond process, which achieves a purity of greater than 99.99%. The process was patented by Ludwig Mond and has been in industrial use since before the beginning of the 20th century. In this process, nickel is reacted with carbon monoxide in the presence of a sulfur catalyst at around 40–80 °C to form nickel carbonyl. Iron gives iron pentacarbonyl, too, but this reaction is slow. If necessary, the nickel may be separated by distillation. Dicobalt octacarbonyl is also formed in nickel distillation as a by-product, but it decomposes to tetracobalt dodecacarbonyl at the reaction temperature to give a non-volatile solid. Nickel is obtained from nickel carbonyl by one of two processes. It may be passed through a large chamber at high temperatures in which tens of thousands of nickel spheres, called pellets, are constantly stirred. The carbonyl decomposes and deposits pure nickel onto the nickel spheres. In the alternate process, nickel carbonyl is decomposed in a smaller chamber at 230 °C to create a fine nickel powder. The byproduct carbon monoxide is recirculated and reused. The highly pure nickel product is known as "carbonyl nickel". The market price of nickel surged throughout 2006 and the early months of 2007; as of April 5, 2007, the metal was trading at US$52,300/tonne or $1.47/oz. The price subsequently fell dramatically, and as of September 2017, the metal was trading at $11,000/tonne, or $0.31/oz. The US nickel coin contains 1.25 grams (0.044 oz) of nickel, which at the April 2007 price was worth 6.5 cents, along with 3.75 grams of copper worth about 3 cents, with a total metal value of more than 9 cents. Since the face value of a nickel is 5 cents, this made it an attractive target for melting by people wanting to sell the metals at a profit. However, the United States Mint, in anticipation of this practice, implemented new interim rules on December 14, 2006, subject to public comment for 30 days, which criminalized the melting and export of cents and nickels. Violators can be punished with a fine of up to $10,000 and/or imprisoned for a maximum of five years. As of September 19, 2013, the melt value of a US nickel (copper and nickel included) is $0.045, which is 90% of the face value.
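The coin arithmetic above is easy to reproduce. The sketch below recomputes the April 2007 melt value; the nickel price is the one quoted in the text, while the copper price is an assumed illustrative value of roughly $8 per kilogram, chosen only to match the "about 3 cents" of copper mentioned above.

```python
# Sketch: melt value of a US five-cent coin (5.000 g, 75% Cu / 25% Ni).
COIN_MASS_G = 5.000
NI_FRACTION, CU_FRACTION = 0.25, 0.75

ni_price_per_g = 52_300 / 1_000_000     # $52,300 per tonne (April 2007, from the text) -> $/g
cu_price_per_g = 8.0 / 1000             # assumed ~$8 per kg of copper -> $/g

ni_value = COIN_MASS_G * NI_FRACTION * ni_price_per_g
cu_value = COIN_MASS_G * CU_FRACTION * cu_price_per_g
print(f"nickel: ${ni_value:.3f}, copper: ${cu_value:.3f}, total: ${ni_value + cu_value:.3f}")
# roughly $0.065 + $0.030 = $0.095, i.e. more than 9 cents against a 5-cent face value
```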
The global production of nickel is presently used as follows: 68% in stainless steel; 10% in nonferrous alloys; 9% in electroplating; 7% in alloy steel; 3% in foundries; and 4% other uses (including batteries). Nickel is used in many specific and recognizable industrial and consumer products, including stainless steel, alnico magnets, coinage, rechargeable batteries, electric guitar strings, microphone capsules, plating on plumbing fixtures, and special alloys such as permalloy, elinvar, and invar. It is used for plating and as a green tint in glass. Nickel is preeminently an alloy metal, and its chief use is in nickel steels and nickel cast irons, in which it typically increases the tensile strength, toughness, and elastic limit. It is widely used in many other alloys, including nickel brasses and bronzes and alloys with copper, chromium, aluminium, lead, cobalt, silver, and gold (Inconel, Incoloy, Monel, Nimonic). Because it is resistant to corrosion, nickel was occasionally used as a substitute for decorative silver. Nickel was also occasionally used in some countries after 1859 as a cheap coinage metal (see above), but in the later years of the 20th century, it was replaced by cheaper stainless steel (i.e., iron) alloys, except in the United States and Canada. Nickel is an excellent alloying agent for certain precious metals and is used in the fire assay as a collector of platinum group elements (PGE). As such, nickel is capable of fully collecting all six PGE elements from ores, and of partially collecting gold. High-throughput nickel mines may also engage in PGE recovery (primarily platinum and palladium); examples are Norilsk in Russia and the Sudbury Basin in Canada. Nickel foam or nickel mesh is used in gas diffusion electrodes for alkaline fuel cells. Nickel and its alloys are frequently used as catalysts for hydrogenation reactions. Raney nickel, a finely divided nickel-aluminium alloy, is one common form, though related Raney-type catalysts are also used. Nickel is a naturally magnetostrictive material, meaning that, in the presence of a magnetic field, the material undergoes a small change in length. The magnetostriction of nickel is on the order of 50 ppm and is negative, indicating that it contracts. Nickel is used as a binder in the cemented tungsten carbide or hardmetal industry, in proportions of 6% to 12% by weight. Nickel makes the tungsten carbide magnetic and adds corrosion resistance to the cemented parts, although the hardness is less than that of parts made with a cobalt binder. Nickel-63, with its half-life of 100.1 years, is useful in krytron devices as a beta particle (high-speed electron) emitter to make ionization by the keep-alive electrode more reliable. Around 27% of all nickel production is destined for engineering, 10% for building and construction, 14% for tubular products, 20% for metal goods, 14% for transport, 11% for electronic goods, and 5% for other uses. Raney nickel is widely used for hydrogenation of unsaturated oils to make margarine, and substandard margarine and leftover oil may contain nickel as a contaminant. Forte et al. found that type 2 diabetic patients have 0.89 ng/ml of Ni in the blood relative to 0.77 ng/ml in the control subjects. Although not recognized until the 1970s, nickel is known to play an important role in the biology of some plants, eubacteria, archaebacteria, and fungi. Nickel enzymes such as urease are considered virulence factors in some organisms.
Urease catalyzes the hydrolysis of urea to form ammonia and carbamate. The NiFe hydrogenases can catalyze the oxidation of H2 to form protons and electrons, and can also catalyze the reverse reaction, the reduction of protons to form hydrogen gas. A nickel-tetrapyrrole coenzyme, cofactor F430, is present in methyl coenzyme M reductase, which can catalyze the formation of methane, or the reverse reaction, in methanogenic archaea. One of the carbon monoxide dehydrogenase enzymes consists of an Fe-Ni-S cluster. Other nickel-bearing enzymes include a rare bacterial class of superoxide dismutase and glyoxalase I enzymes in bacteria and several eukaryotic trypanosomal parasites (in higher organisms, including yeast and mammals, this enzyme contains divalent Zn2+). Dietary nickel may affect human health through infections by nickel-dependent bacteria, but it is also possible that nickel is an essential nutrient for bacteria residing in the large intestine, in effect functioning as a prebiotic. The US Institute of Medicine has not confirmed that nickel is an essential nutrient for humans, so neither a Recommended Dietary Allowance (RDA) nor an Adequate Intake have been established. The Tolerable Upper Intake Level of dietary nickel is 1000 µg/day as soluble nickel salts. Dietary intake is estimated at 70 to 100 µg/day, with less than 10% absorbed. What is absorbed is excreted in urine. Relatively large amounts of nickel – comparable to the estimated average ingestion above – leach into food cooked in stainless steel. For example, the amount of nickel leached after 10 cooking cycles into one serving of tomato sauce averages 88 µg. Nickel released from Siberian Traps volcanic eruptions is suspected of assisting the growth of "Methanosarcina", a genus of euryarchaeote archaea that produced methane during the Permian–Triassic extinction event, the biggest extinction event on record. The major source of nickel exposure is oral consumption, as nickel is essential to plants. Nickel is found naturally in both food and water, and may be increased by human pollution. For example, nickel-plated faucets may contaminate water and soil; mining and smelting may dump nickel into waste-water; nickel–steel alloy cookware and nickel-pigmented dishes may release nickel into food. The atmosphere may be polluted by nickel ore refining and fossil fuel combustion. Humans may absorb nickel directly from tobacco smoke and skin contact with jewelry, shampoos, detergents, and coins. A less-common form of chronic exposure is through hemodialysis, as traces of nickel ions may be absorbed into the plasma from the chelating action of albumin. The average daily exposure does not pose a threat to human health. Most of the nickel absorbed every day by humans is removed by the kidneys and passed out of the body through urine or is eliminated through the gastrointestinal tract without being absorbed. Nickel is not a cumulative poison, but larger doses or chronic inhalation exposure may be toxic, even carcinogenic, and constitute an occupational hazard. Nickel compounds are classified as human carcinogens based on increased respiratory cancer risks observed in epidemiological studies of sulfidic ore refinery workers. This is supported by the positive results of the NTP bioassays with Ni sub-sulfide and Ni oxide in rats and mice. The human and animal data consistently indicate a lack of carcinogenicity via the oral route of exposure and limit the carcinogenicity of nickel compounds to respiratory tumours after inhalation.
Nickel metal is classified as a suspect carcinogen; there is consistency between the absence of increased respiratory cancer risks in workers predominantly exposed to metallic nickel and the lack of respiratory tumours in a rat lifetime inhalation carcinogenicity study with nickel metal powder. In the rodent inhalation studies with various nickel compounds and nickel metal, increased lung inflammations with and without bronchial lymph node hyperplasia or fibrosis were observed. In rat studies, oral ingestion of water-soluble nickel salts can trigger perinatal mortality effects in pregnant animals. Whether these effects are relevant to humans is unclear, as epidemiological studies of highly exposed female workers have not shown adverse developmental toxicity effects. People can be exposed to nickel in the workplace by inhalation, ingestion, and contact with skin or eyes. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for the workplace at 1 mg/m3 per 8-hour workday, excluding nickel carbonyl. The National Institute for Occupational Safety and Health (NIOSH) specifies the recommended exposure limit (REL) of 0.015 mg/m3 per 8-hour workday. At 10 mg/m3, nickel is immediately dangerous to life and health. Nickel carbonyl [Ni(CO)4] is an extremely toxic gas. The toxicity of metal carbonyls is a function of both the toxicity of the metal and the off-gassing of carbon monoxide from the carbonyl functional groups; nickel carbonyl is also explosive in air. Sensitized individuals may show a skin contact allergy to nickel known as contact dermatitis. Highly sensitized individuals may also react to foods with high nickel content. Sensitivity to nickel may also be present in patients with pompholyx. Nickel is the top confirmed contact allergen worldwide, partly due to its use in jewelry for pierced ears. Nickel allergies affecting pierced ears are often marked by itchy, red skin. Many earrings are now made without nickel or with low-release nickel to address this problem. The amount allowed in products that contact human skin is now regulated by the European Union. In 2002, researchers found that the nickel released by 1 and 2 euro coins was far in excess of those standards. This is believed to be the result of a galvanic reaction. Nickel was voted Allergen of the Year in 2008 by the American Contact Dermatitis Society. In August 2015, the American Academy of Dermatology adopted a position statement on the safety of nickel: "Estimates suggest that contact dermatitis, which includes nickel sensitization, accounts for approximately $1.918 billion and affects nearly 72.29 million people." Reports show that both the nickel-induced activation of hypoxia-inducible factor (HIF-1) and the up-regulation of hypoxia-inducible genes are caused by depletion of intracellular ascorbate. The addition of ascorbate to the culture medium increased the intracellular ascorbate level and reversed both the metal-induced stabilization of HIF-1α and the HIF-1α-dependent gene expression.
https://en.wikipedia.org/wiki?curid=21274
Niobium Niobium, also known as columbium, is a chemical element with the symbol Nb (formerly Cb) and atomic number 41. Niobium is a light grey, crystalline, and ductile transition metal. Pure niobium has a hardness similar to that of pure titanium, and it has similar ductility to iron. Niobium oxidizes in the earth's atmosphere very slowly, hence its application in jewelry as a hypoallergenic alternative to nickel. Niobium is often found in the minerals pyrochlore and columbite, hence the former name "columbium". Its name comes from Greek mythology, specifically Niobe, who was the daughter of Tantalus, the namesake of tantalum. The name reflects the great similarity between the two elements in their physical and chemical properties, making them difficult to distinguish. The English chemist Charles Hatchett reported a new element similar to tantalum in 1801 and named it columbium. In 1809, the English chemist William Hyde Wollaston wrongly concluded that tantalum and columbium were identical. The German chemist Heinrich Rose determined in 1846 that tantalum ores contain a second element, which he named niobium. In 1864 and 1865, a series of scientific findings clarified that niobium and columbium were the same element (as distinguished from tantalum), and for a century both names were used interchangeably. Niobium was officially adopted as the name of the element in 1949, but the name columbium remains in current use in metallurgy in the United States. It was not until the early 20th century that niobium was first used commercially. Brazil is the leading producer of niobium and ferroniobium, an alloy of 60–70% niobium with iron. Niobium is used mostly in alloys, the largest part in special steel such as that used in gas pipelines. Although these alloys contain a maximum of 0.1%, the small percentage of niobium enhances the strength of the steel. The temperature stability of niobium-containing superalloys is important for its use in jet and rocket engines. Niobium is used in various superconducting materials. These superconducting alloys, also containing titanium and tin, are widely used in the superconducting magnets of MRI scanners. Other applications of niobium include welding, nuclear industries, electronics, optics, numismatics, and jewelry. In the last two applications, the low toxicity and iridescence produced by anodization are highly desired properties. Niobium is considered a technology-critical element. Niobium was identified by English chemist Charles Hatchett in 1801. He found a new element in a mineral sample that had been sent to England from Connecticut, United States in 1734 by John Winthrop F.R.S. (grandson of John Winthrop the Younger) and named the mineral "columbite" and the new element "columbium" after "Columbia", the poetical name for the United States. The "columbium" discovered by Hatchett was probably a mixture of the new element with tantalum. Subsequently, there was considerable confusion over the difference between columbium (niobium) and the closely related tantalum. In 1809, English chemist William Hyde Wollaston compared the oxides derived from both columbium—columbite, with a density 5.918 g/cm3, and tantalum—tantalite, with a density over 8 g/cm3, and concluded that the two oxides, despite the significant difference in density, were identical; thus he kept the name tantalum. 
This conclusion was disputed in 1846 by German chemist Heinrich Rose, who argued that there were two different elements in the tantalite sample, and named them after children of Tantalus: "niobium" (from Niobe) and "pelopium" (from Pelops). This confusion arose from the minimal observed differences between tantalum and niobium. The claimed new elements "pelopium", "ilmenium", and "dianium" were in fact identical to niobium or mixtures of niobium and tantalum. The differences between tantalum and niobium were unequivocally demonstrated in 1864 by Christian Wilhelm Blomstrand and Henri Etienne Sainte-Claire Deville, as well as Louis J. Troost, who determined the formulas of some of the compounds in 1865 and finally by Swiss chemist Jean Charles Galissard de Marignac in 1866, who all proved that there were only two elements. Articles on "ilmenium" continued to appear until 1871. De Marignac was the first to prepare the metal in 1864, when he reduced niobium chloride by heating it in an atmosphere of hydrogen. Although de Marignac was able to produce tantalum-free niobium on a larger scale by 1866, it was not until the early 20th century that niobium was used in incandescent lamp filaments, the first commercial application. This use quickly became obsolete through the replacement of niobium with tungsten, which has a higher melting point. That niobium improves the strength of steel was first discovered in the 1920s, and this application remains its predominant use. In 1961, the American physicist Eugene Kunzler and coworkers at Bell Labs discovered that niobium-tin continues to exhibit superconductivity in the presence of strong electric currents and magnetic fields, making it the first material to support the high currents and fields necessary for useful high-power magnets and electrical power machinery. This discovery enabled – two decades later – the production of long multi-strand cables wound into coils to create large, powerful electromagnets for rotating machinery, particle accelerators, and particle detectors. "Columbium" (symbol "Cb") was the name originally bestowed by Hatchett upon his discovery of the metal in 1801. The name reflected that the type specimen of the ore came from America (Columbia). This name remained in use in American journals—the last paper published by American Chemical Society with "columbium" in its title dates from 1953—while "niobium" was used in Europe. To end this confusion, the name "niobium" was chosen for element 41 at the 15th Conference of the Union of Chemistry in Amsterdam in 1949. A year later this name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) after 100 years of controversy, despite the chronological precedence of the name "columbium". This was a compromise of sorts; the IUPAC accepted tungsten instead of wolfram in deference to North American usage; and "niobium" instead of "columbium" in deference to European usage. While many US chemical societies and government organizations typically use the official IUPAC name, some metallurgists and metal societies still use the original American name, ""columbium"". Niobium is a lustrous, grey, ductile, paramagnetic metal in group 5 of the periodic table (see table), with an electron configuration in the outermost shells atypical for group 5. (This can be observed in the neighborhood of ruthenium (44), rhodium (45), and palladium (46).) 
Although it is thought to have a body-centered cubic crystal structure from absolute zero to its melting point, high-resolution measurements of the thermal expansion along the three crystallographic axes reveal anisotropies which are inconsistent with a cubic structure. Therefore, further research and discovery in this area is expected. Niobium becomes a superconductor at cryogenic temperatures. At atmospheric pressure, it has the highest critical temperature of the elemental superconductors at 9.2 K. Niobium has the greatest magnetic penetration depth of any element. In addition, it is one of the three elemental Type II superconductors, along with vanadium and technetium. The superconductive properties are strongly dependent on the purity of the niobium metal. When very pure, it is comparatively soft and ductile, but impurities make it harder. The metal has a low capture cross-section for thermal neutrons; thus it is used in the nuclear industries where neutron transparent structures are desired. The metal takes on a bluish tinge when exposed to air at room temperature for extended periods. Despite a high melting point in elemental form (2,468 °C), it has a lower density than other refractory metals. Furthermore, it is corrosion-resistant, exhibits superconductivity properties, and forms dielectric oxide layers. Niobium is slightly less electropositive and more compact than its predecessor in the periodic table, zirconium, whereas it is virtually identical in size to the heavier tantalum atoms, as a result of the lanthanide contraction. As a result, niobium's chemical properties are very similar to those for tantalum, which appears directly below niobium in the periodic table. Although its corrosion resistance is not as outstanding as that of tantalum, the lower price and greater availability make niobium attractive for less demanding applications, such as vat linings in chemical plants. Niobium in the Earth's crust comprises one stable isotope, 93Nb. By 2003, at least 32 radioisotopes had been synthesized, ranging in atomic mass from 81 to 113. The most stable of these is 92Nb with a half-life of 34.7 million years. One of the least stable is 113Nb, with an estimated half-life of 30 milliseconds. Isotopes that are lighter than the stable 93Nb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. 81Nb, 82Nb, and 84Nb have minor β+ delayed proton emission decay paths, 91Nb decays by electron capture and positron emission, and 92Nb decays by both β+ and β− decay. At least 25 nuclear isomers have been described, ranging in atomic mass from 84 to 104. Within this range, only 96Nb, 101Nb, and 103Nb do not have isomers. The most stable of niobium's isomers is 93mNb with a half-life of 16.13 years. The least stable isomer is 84mNb with a half-life of 103 ns. All of niobium's isomers decay by isomeric transition or beta decay except 92m1Nb, which has a minor electron capture branch. Niobium is estimated to be the 34th most common element in the Earth's crust, with 20 ppm. Some think that the abundance on Earth is much greater, and that the element's high density has concentrated it in the Earth's core. The free element is not found in nature, but niobium occurs in combination with other elements in minerals. Minerals that contain niobium often also contain tantalum. Examples include columbite ((Fe,Mn)(Nb,Ta)2O6) and columbite–tantalite (or "coltan", (Fe,Mn)(Ta,Nb)2O6). 
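The statement above that crustal niobium consists of a single stable isotope is consistent with the half-life quoted for 92Nb: over the age of the Earth, essentially none of any primordial 92Nb would survive. A back-of-envelope decay calculation is sketched below.

```python
# Sketch: fraction of primordial 92Nb surviving to the present day,
# using simple exponential decay with the half-life quoted above.
HALF_LIFE_YR = 34.7e6
AGE_OF_EARTH_YR = 4.54e9

half_lives_elapsed = AGE_OF_EARTH_YR / HALF_LIFE_YR
fraction_left = 0.5 ** half_lives_elapsed
print(f"{half_lives_elapsed:.0f} half-lives elapsed; fraction remaining ~ {fraction_left:.1e}")
```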
Columbite–tantalite minerals (the most common species being columbite-(Fe) and tantalite-(Fe), where "-(Fe)" is the Levinson suffix indicating the prevalence of iron over other elements such as manganese) are most usually found as accessory minerals in pegmatite intrusions and in alkaline intrusive rocks. Less common are the niobates of calcium, uranium, thorium and the rare earth elements. Examples of such niobates are pyrochlore ((Na,Ca)2Nb2O6(OH,F)) (now a group name, with a relatively common example being fluorcalciopyrochlore) and euxenite (correctly named euxenite-(Y)) ((Y,Ca,Ce,U,Th)(Nb,Ta,Ti)2O6). Large deposits of niobium have been found associated with carbonatites (carbonate-silicate igneous rocks) and as a constituent of pyrochlore. The three largest currently mined deposits of pyrochlore, two in Brazil and one in Canada, were found in the 1950s, and are still the major producers of niobium mineral concentrates. The largest deposit is hosted within a carbonatite intrusion in Araxá, state of Minas Gerais, Brazil, owned by CBMM (Companhia Brasileira de Metalurgia e Mineração); the other active Brazilian deposit is located near Catalão, state of Goiás, and owned by China Molybdenum, also hosted within a carbonatite intrusion. Together, those two mines produce about 88% of the world's supply. Brazil also has a large but still unexploited deposit near São Gabriel da Cachoeira, state of Amazonas, as well as a few smaller deposits, notably in the state of Roraima. The third largest producer of niobium is the carbonatite-hosted Niobec mine, in Saint-Honoré, near Chicoutimi, Quebec, Canada, owned by Magris Resources. It produces between 7% and 10% of the world's supply. After the separation from the other minerals, the mixed oxides of tantalum (Ta2O5) and niobium (Nb2O5) are obtained. The first step in the processing is the reaction of the oxides with hydrofluoric acid, which converts them into soluble complex fluorides (the reactions are sketched below). The first industrial-scale separation, developed by de Marignac, exploits the differing solubilities of the complex niobium and tantalum fluorides, dipotassium oxypentafluoroniobate monohydrate (K2[NbOF5]·H2O) and dipotassium heptafluorotantalate (K2[TaF7]), in water. Newer processes use the liquid extraction of the fluorides from aqueous solution by organic solvents like cyclohexanone. The complex niobium and tantalum fluorides are extracted separately from the organic solvent with water and either precipitated by the addition of potassium fluoride to produce a potassium fluoride complex, or precipitated with ammonia as the hydrated pentoxide. Several methods are used for the reduction to metallic niobium. The electrolysis of a molten mixture of K2[NbOF5] and sodium chloride is one; the other is the reduction of the fluoride with sodium. With this method, a relatively high-purity niobium can be obtained. In large-scale production, Nb2O5 is reduced with hydrogen or carbon. In the aluminothermic reaction, a mixture of iron oxide and niobium oxide is reacted with aluminium; small amounts of oxidizers like sodium nitrate are added to enhance the reaction. The result is aluminium oxide and ferroniobium, an alloy of iron and niobium used in steel production. Ferroniobium contains between 60 and 70% niobium. Without iron oxide, the aluminothermic process is used to produce niobium metal itself. Further purification is necessary to reach the grade for superconductive alloys. Electron beam melting under vacuum is the method used by the two major distributors of niobium.
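The separation and reduction chemistry just described can be summarized with a few balanced equations. This is a minimal sketch using standard textbook stoichiometries; the equations are not quoted from the source, and actual plant conditions vary:

```latex
% Dissolution of the mixed oxides in hydrofluoric acid
\mathrm{Nb_2O_5 + 10\,HF \rightarrow 2\,H_2[NbOF_5] + 3\,H_2O}
\mathrm{Ta_2O_5 + 14\,HF \rightarrow 2\,H_2[TaF_7] + 5\,H_2O}
% Precipitation of the niobium with ammonia as the hydrated pentoxide
\mathrm{2\,H_2[NbOF_5] + 10\,NH_4OH \rightarrow Nb_2O_5\!\downarrow + 10\,NH_4F + 7\,H_2O}
% Aluminothermic reduction to ferroniobium
\mathrm{3\,Nb_2O_5 + Fe_2O_3 + 12\,Al \rightarrow 6\,Nb + 2\,Fe + 6\,Al_2O_3}
```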
CBMM from Brazil has controlled about 85 percent of the world's niobium production. The United States Geological Survey estimates that production increased from 38,700 tonnes in 2005 to 44,500 tonnes in 2006. Worldwide resources are estimated to be 4,400,000 tonnes. During the ten-year period between 1995 and 2005, production more than doubled, starting from 17,800 tonnes in 1995. Between 2009 and 2011, production was stable at 63,000 tonnes per year, with a decrease in 2012 to 50,000 tonnes per year. Lesser amounts are found in Malawi's Kanyika Deposit (Kanyika mine). In many ways, niobium is similar to tantalum and zirconium. It reacts with most nonmetals at high temperatures: with fluorine at room temperature, with chlorine at 150 °C and hydrogen at 200 °C, and with nitrogen at 400 °C, with products that are frequently interstitial and nonstoichiometric. The metal begins to oxidize in air at 200 °C. It resists corrosion by fused alkalis and by acids, including aqua regia and hydrochloric, sulfuric, nitric and phosphoric acids. Niobium is attacked by hydrofluoric acid and hydrofluoric/nitric acid mixtures. Although niobium exhibits all of the formal oxidation states from +5 to −1, the most common compounds have niobium in the +5 state. Characteristically, compounds in oxidation states less than +5 display Nb–Nb bonding. In aqueous solution, niobium exhibits only the +5 oxidation state. It is also readily prone to hydrolysis and is barely soluble in dilute solutions of hydrochloric, sulfuric, nitric and phosphoric acids due to the precipitation of hydrous Nb oxide. Nb(V) is also slightly soluble in alkaline media due to the formation of soluble polyoxoniobate species. Niobium forms oxides in the oxidation states +5 (Nb2O5), +4 (NbO2), +3 (Nb2O3), and the rarer oxidation state +2 (NbO). Most common is the pentoxide, the precursor to almost all niobium compounds and alloys. Niobates are generated by dissolving the pentoxide in basic hydroxide solutions or by melting it in alkali metal oxides. Examples are lithium niobate (LiNbO3) and lanthanum niobate (LaNbO4). Lithium niobate has a trigonally distorted perovskite-like structure, whereas lanthanum niobate contains isolated NbO43− ions. The layered niobium sulfide (NbS2) is also known. Materials can be coated with a thin film of niobium(V) oxide by chemical vapor deposition or atomic layer deposition processes, in which the film is produced by the thermal decomposition of niobium(V) ethoxide above 350 °C. Niobium forms halides in the oxidation states of +5 and +4 as well as diverse substoichiometric compounds. The pentahalides (NbX5) feature octahedral Nb centres. Niobium pentafluoride (NbF5) is a white solid with a melting point of 79.0 °C and niobium pentachloride (NbCl5) is yellow with a melting point of 203.4 °C. Both are hydrolyzed to give oxides and oxyhalides, such as NbOCl3. The pentachloride is a versatile reagent used to generate organometallic compounds, such as niobocene dichloride ((C5H5)2NbCl2). The tetrahalides (NbX4) are dark-coloured polymers with Nb–Nb bonds; for example, the black hygroscopic niobium tetrafluoride (NbF4) and brown niobium tetrachloride (NbCl4). Anionic halide compounds of niobium are well known, owing in part to the Lewis acidity of the pentahalides. The most important is [NbF7]2−, an intermediate in the separation of Nb and Ta from the ores. This heptafluoride tends to form the oxopentafluoride more readily than does the tantalum compound.
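As an illustration of the hydrolysis mentioned above, the pentachloride can hydrolyze partially to the oxychloride or completely to the pentoxide. These are the standard balanced forms, given as a sketch rather than taken from the source:

```latex
\mathrm{NbCl_5 + H_2O \rightarrow NbOCl_3 + 2\,HCl}          % partial hydrolysis to the oxyhalide
\mathrm{2\,NbCl_5 + 5\,H_2O \rightarrow Nb_2O_5 + 10\,HCl}   % complete hydrolysis to the pentoxide
```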
Other halide complexes include octahedral [NbCl6]−, formed by the addition of chloride to the pentachloride. As with other metals with low d electron counts, a variety of reduced halide cluster ions is known, the prime example being [Nb6Cl18]4−. Other binary compounds of niobium include niobium nitride (NbN), which becomes a superconductor at low temperatures and is used in detectors for infrared light. The main niobium carbide is NbC, an extremely hard, refractory, ceramic material, commercially used in cutting tool bits. Out of 44,500 tonnes of niobium mined in 2006, an estimated 90% was used in high-grade structural steel. The second largest application is superalloys. Niobium alloy superconductors and electronic components account for a very small share of world production. Niobium is an effective microalloying element for steel, within which it forms niobium carbide and niobium nitride. These compounds promote grain refinement, retard recrystallization, and contribute to precipitation hardening. These effects in turn increase the toughness, strength, formability, and weldability of the steel. In microalloyed steels, the niobium content is a small (less than 0.1%) but important addition to the high-strength low-alloy steels that are widely used structurally in modern automobiles. Niobium is sometimes used in considerably higher quantities for highly wear-resistant machine components and knives, as high as 3% in Crucible CPM S110V stainless steel. These same niobium alloys are often used in pipeline construction. Quantities of niobium are used in nickel-, cobalt-, and iron-based superalloys in proportions as great as 6.5% for such applications as jet engine components, gas turbines, rocket subassemblies, turbocharger systems, and heat-resisting and combustion equipment. Niobium precipitates a hardening γ″-phase within the grain structure of the superalloy. One example superalloy is Inconel 718, consisting of roughly 50% nickel, 18.6% chromium, 18.5% iron, 5% niobium, 3.1% molybdenum, 0.9% titanium, and 0.4% aluminium. These superalloys were used, for example, in advanced airframe systems for the Gemini program. Another niobium alloy was used for the nozzle of the Apollo Service Module. Because niobium is oxidized at temperatures above 400 °C, a protective coating is necessary for these applications to prevent the alloy from becoming brittle. C-103 alloy was developed in the early 1960s jointly by the Wah Chang Corporation and Boeing Co.; DuPont, Union Carbide Corp., General Electric Co. and several other companies were developing Nb-base alloys simultaneously, largely driven by the Cold War and the Space Race. C-103 is composed of 89% niobium, 10% hafnium and 1% titanium and is used for liquid rocket thruster nozzles, such as the main engine of the Apollo Lunar Modules. The nozzle of the Merlin Vacuum series of engines developed by SpaceX for the upper stage of its Falcon 9 rocket is made from a niobium alloy. The reactivity of niobium with oxygen requires it to be worked in a vacuum or inert atmosphere, which significantly increases the cost and difficulty of production. Vacuum arc remelting (VAR) and electron beam melting (EBM), novel processes at the time, enabled the development of niobium and other reactive metals. The project that yielded C-103 began in 1959 with as many as 256 experimental niobium alloys in the "C-series" (possibly from columbium) that could be melted as buttons and rolled into sheet. Wah Chang had an inventory of hafnium, refined from nuclear-grade zirconium alloys, that it wanted to put to commercial use.
The 103rd experimental composition of the C-series alloys, Nb-10Hf-1Ti, had the best combination of formability and high-temperature properties. Wah Chang fabricated the first 500-lb heat of C-103 in 1961, ingot to sheet, using EBM and VAR. The intended applications included turbine engines and liquid metal heat exchangers. Competing niobium alloys from that era included FS85 (Nb-10W-28Ta-1Zr) from Fansteel Metallurgical Corp., Cb129Y (Nb-10W-10Hf-0.2Y) from Wah Chang and Boeing, Cb752 (Nb-10W-2.5Zr) from Union Carbide, and Nb1Zr from Superior Tube Co. Niobium-germanium (Nb3Ge) and niobium-tin (Nb3Sn), as well as the niobium-titanium alloys, are used as type II superconductor wire for superconducting magnets. These superconducting magnets are used in magnetic resonance imaging and nuclear magnetic resonance instruments as well as in particle accelerators. For example, the Large Hadron Collider uses 600 tonnes of superconducting strands, while the International Thermonuclear Experimental Reactor uses an estimated 600 tonnes of Nb3Sn strands and 250 tonnes of NbTi strands. In 1992 alone, more than US$1 billion worth of clinical magnetic resonance imaging systems were constructed with niobium-titanium wire. The superconducting radio frequency (SRF) cavities used in the free-electron lasers FLASH (a result of the cancelled TESLA linear accelerator project) and XFEL are made from pure niobium. A cryomodule team at Fermilab used the same SRF technology from the FLASH project to develop 1.3 GHz nine-cell SRF cavities made from pure niobium. The cavities will be used in the linear particle accelerator of the International Linear Collider. The same technology will be used in LCLS-II at SLAC National Accelerator Laboratory and PIP-II at Fermilab. The high sensitivity of superconducting niobium nitride bolometers makes them ideal detectors for electromagnetic radiation in the THz frequency band. These detectors were tested at the Submillimeter Telescope, the South Pole Telescope, the Receiver Lab Telescope, and at APEX, and are now used in the HIFI instrument on board the Herschel Space Observatory. Lithium niobate, which is a ferroelectric, is used extensively in mobile telephones and optical modulators, and for the manufacture of surface acoustic wave devices. It belongs to the ABO3 class of ferroelectrics, like lithium tantalate and barium titanate. Niobium capacitors are available as an alternative to tantalum capacitors, but tantalum capacitors still predominate. Niobium is added to glass to obtain a higher refractive index, making possible thinner and lighter corrective lenses. Niobium and some niobium alloys are physiologically inert and hypoallergenic. For this reason, niobium is used in prosthetics and implant devices, such as pacemakers. Niobium treated with sodium hydroxide forms a porous layer that aids osseointegration. Like titanium, tantalum, and aluminium, niobium can be heated and anodized ("reactive metal anodization") to produce a wide array of iridescent colours for jewelry, where its hypoallergenic property is highly desirable. Niobium is used as a precious metal in commemorative coins, often with silver or gold. For example, Austria produced a series of silver niobium euro coins starting in 2003; the colour in these coins is created by the diffraction of light by a thin anodized oxide layer. By 2012, ten coins were available, showing a broad variety of colours in the centre of the coin: blue, green, brown, purple, violet, or yellow.
Two more examples are the 2004 Austrian €25 150 Years Semmering Alpine Railway commemorative coin and the 2006 Austrian €25 European Satellite Navigation commemorative coin. The Austrian mint produced a similar series of coins for Latvia starting in 2004, with another following in 2007. In 2011, the Royal Canadian Mint started production of a $5 sterling silver and niobium coin named "Hunter's Moon", in which the niobium was selectively oxidized, creating unique finishes where no two coins are exactly alike. The arc-tube seals of high-pressure sodium vapor lamps are made from niobium, sometimes alloyed with 1% of zirconium; niobium has a coefficient of thermal expansion very similar to that of the sintered alumina arc tube ceramic, a translucent material which resists chemical attack or reduction by the hot liquid sodium and sodium vapour contained inside the operating lamp. Niobium is used in arc welding rods for some stabilized grades of stainless steel and in anodes for cathodic protection systems on some water tanks; these anodes are then usually plated with platinum. Niobium is an important component of high-performance heterogeneous catalysts for the production of acrylic acid by selective oxidation of propane. Niobium is used to make the high-voltage wire of the solar corona particles receptor module of the Parker Solar Probe. Niobium has no known biological role. While niobium dust is an eye and skin irritant and a potential fire hazard, elemental niobium on a larger scale is physiologically inert (and thus hypoallergenic) and harmless. It is frequently used in jewelry and has been tested for use in some medical implants. Niobium-containing compounds are rarely encountered by most people, but some are toxic and should be treated with care. Short- and long-term exposure to niobates and niobium chloride, two chemicals that are water-soluble, has been tested in rats. Rats treated with a single injection of niobium pentachloride or niobates show a median lethal dose (LD50) between 10 and 100 mg/kg. For oral administration the toxicity is lower; a study with rats yielded an LD50 of 940 mg/kg after seven days.
https://en.wikipedia.org/wiki?curid=21275
Neodymium Neodymium is a chemical element with the symbol Nd and atomic number 60. Neodymium belongs to the lanthanide series and is a rare-earth element. It is a hard, slightly malleable silvery metal that quickly tarnishes in air and moisture. When oxidized, neodymium reacts quickly to produce pink, purple/blue and yellow compounds in the +2, +3 and +4 oxidation states. Neodymium was discovered in 1885 by the Austrian chemist Carl Auer von Welsbach. It is present in significant quantities in the ore minerals monazite and bastnäsite. Neodymium is not found naturally in metallic form or unmixed with other lanthanides, and it is usually refined for general use. Although neodymium is classed as a rare-earth element, it is fairly common, no rarer than cobalt, nickel, or copper, and is widely distributed in the Earth's crust. Most of the world's commercial neodymium is mined in China. Neodymium compounds were first commercially used as glass dyes in 1927, and they remain a popular additive in glasses. The color of neodymium compounds is due to the Nd3+ ion and is often a reddish-purple, but it changes with the type of lighting, because of the interaction of the sharp light absorption bands of neodymium with ambient light enriched with the sharp visible emission bands of mercury, trivalent europium or terbium. Some neodymium-doped glasses are used in lasers that emit infrared with wavelengths between 1047 and 1062 nanometers. These have been used in extremely-high-power applications, such as experiments in inertial confinement fusion. Neodymium is also used with various other substrate crystals, such as yttrium aluminium garnet in the Nd:YAG laser. Another important use of neodymium is as a component in the alloys used to make high-strength neodymium magnets—powerful permanent magnets. These magnets are widely used in such products as microphones, professional loudspeakers, in-ear headphones, high-performance hobby DC electric motors, and computer hard disks, where low magnet mass (or volume) or strong magnetic fields are required. Larger neodymium magnets are used in high-power-versus-weight electric motors (for example in hybrid cars) and generators (for example aircraft and wind turbine electric generators). Neodymium, a rare-earth metal, was present in the classical mischmetal at a concentration of about 18%. Metallic neodymium has a bright, silvery metallic luster. Neodymium commonly exists in two allotropic forms, with a transformation from a double hexagonal to a body-centered cubic structure taking place at about 863 °C. Neodymium is paramagnetic at room temperature and becomes an antiferromagnet upon cooling to about 20 K. To make neodymium magnets, it is alloyed with iron, which is a ferromagnet. Neodymium metal quickly oxidizes at ambient conditions and readily burns at about 150 °C to form neodymium(III) oxide; the oxide peels off, exposing the bulk metal to further oxidation. Neodymium is a quite electropositive element, and it reacts slowly with cold water but quite quickly with hot water to form neodymium(III) hydroxide. Neodymium metal reacts vigorously with all the halogens (these reactions are sketched below). Neodymium dissolves readily in dilute sulfuric acid to form solutions that contain the lilac Nd(III) ion, which exists as the [Nd(OH2)9]3+ complex. Some neodymium compounds have colors that vary based upon the type of lighting.
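The reactions summarized in the preceding paragraph can be written as balanced equations. These are the standard textbook forms, included as an illustrative sketch rather than quoted from the source:

```latex
\mathrm{4\,Nd + 3\,O_2 \rightarrow 2\,Nd_2O_3}                               % burning in air
\mathrm{2\,Nd + 6\,H_2O \rightarrow 2\,Nd(OH)_3 + 3\,H_2}                    % reaction with (hot) water
\mathrm{2\,Nd + 3\,X_2 \rightarrow 2\,NdX_3 \qquad (X = F,\ Cl,\ Br,\ I)}    % reaction with the halogens
```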
Naturally occurring neodymium is a mixture of five stable isotopes, 142Nd, 143Nd, 145Nd, 146Nd and 148Nd, with 142Nd being the most abundant (27.2% of the natural abundance), and two radioisotopes, 144Nd and 150Nd. In all, 31 radioisotopes of neodymium have been detected, with the most stable radioisotopes being the naturally occurring ones: 144Nd (alpha decay with a half-life ("t"1/2) of 2.29×1015 years) and 150Nd (double beta decay, "t"1/2 = 7×1018 years, approximately). All of the remaining radioactive isotopes have half-lives that are shorter than eleven days, and the majority of these have half-lives that are shorter than 70 seconds. Neodymium also has 13 known meta states, with the most stable ones being 139"m"Nd ("t"1/2 = 5.5 hours), 135"m"Nd ("t"1/2 = 5.5 minutes) and 133"m"1Nd ("t"1/2 ~70 seconds). The primary decay modes before the most abundant stable isotope, 142Nd, are electron capture and positron decay, and the primary mode after is beta minus decay. The primary decay products before 142Nd are element Pr (praseodymium) isotopes and the primary products after are element Pm (promethium) isotopes. Neodymium was discovered by Austrian chemist Carl Auer von Welsbach in Vienna in 1885. He separated neodymium, as well as the element praseodymium, from their mixture, called didymium, by means of fractional crystallization of the double ammonium nitrate tetrahydrates from nitric acid. Von Welsbach confirmed the separation by spectroscopic analysis, but the products were of relatively low purity. Didymium had been discovered by Carl Gustaf Mosander in 1841, and pure neodymium was isolated from it in 1925. The name neodymium is derived from the Greek words "neos" (νέος), new, and "didymos" (δίδυμος), twin. Double nitrate crystallization was the means of commercial neodymium purification until the 1950s. Lindsay Chemical Division was the first to commercialize large-scale ion-exchange purification of neodymium. Starting in the 1950s, high-purity (above 99%) neodymium was primarily obtained through an ion-exchange process from monazite, a mineral rich in rare-earth elements. The metal is obtained through electrolysis of its halide salts. Currently, most neodymium is extracted from bastnäsite, (Ce,La,Nd,Pr)CO3F, and purified by solvent extraction. Ion-exchange purification is reserved for preparing the highest purities (typically >99.99%). The evolving technology, and the improved purity of commercially available neodymium oxide, are reflected in the appearance of the neodymium glass that resides in collections today. Early neodymium glasses made in the 1930s have a more reddish or orange tinge than modern versions, which are more cleanly purple, because of the difficulties in removing the last traces of praseodymium in the era when manufacturing relied upon fractional crystallization technology. Because of its role in permanent magnets used for direct-drive wind turbines, it has been argued that neodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. This perspective has been criticised for failing to recognise that most wind turbines do not use permanent magnets, and for underestimating the power of economic incentives for expanded production. Neodymium is rarely found in nature as a free element, but rather it occurs in ores such as monazite and bastnäsite (these are mineral group names rather than single mineral names) that contain small amounts of all rare-earth metals.
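As an aside on what a half-life of 2.29×1015 years means in practice, the specific activity of 144Nd can be estimated from the numbers quoted above. This is an illustrative calculation with rounded constants, not a figure taken from the source:

```python
import math

# Illustrative back-of-the-envelope calculation: the specific activity of 144Nd,
# using the half-life quoted above (2.29e15 years). Constants are rounded, so the
# result is a rough sketch rather than a reference value.
AVOGADRO = 6.022e23            # atoms per mole
SECONDS_PER_YEAR = 3.156e7
half_life_s = 2.29e15 * SECONDS_PER_YEAR
decay_constant = math.log(2) / half_life_s     # decays per atom per second
atoms_per_gram = AVOGADRO / 144                # 144Nd has a molar mass of ~144 g/mol
activity_bq_per_gram = decay_constant * atoms_per_gram
print(f"{activity_bq_per_gram:.3f} Bq per gram of pure 144Nd")   # ~0.04 Bq/g
```

The result, a few hundredths of a becquerel per gram, illustrates why such long-lived natural radioisotopes contribute negligibly to the radioactivity of ordinary neodymium.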
In these minerals neodymium is rarely dominant (as is the case with lanthanum), with cerium being the most abundant lanthanide; some exceptions include monazite-(Nd) and kozoite-(Nd). The main mining areas are in China, the United States, Brazil, India, Sri Lanka, and Australia. The reserves of neodymium are estimated at about eight million tonnes. Although it belongs to the rare-earth metals, neodymium is not rare at all. Its abundance in the Earth's crust is about 38 mg/kg, which is the second highest among rare-earth elements, following cerium. The world's production of neodymium was about 7,000 tonnes in 2004. The bulk of current production is from China. Historically, the Chinese government has imposed strategic material controls on the element, causing large fluctuations in prices. The uncertainty of pricing and availability has caused companies (particularly Japanese ones) to create permanent magnets and associated electric motors with fewer rare-earth metals; however, so far they have been unable to eliminate the need for neodymium. Neodymium is typically 10–18% of the rare-earth content of commercial deposits of the light rare-earth-element minerals bastnäsite and monazite. Because neodymium compounds are the most strongly colored of the trivalent lanthanides, neodymium can occasionally dominate the coloration of rare-earth minerals when competing chromophores are absent. It usually gives a pink coloration. Outstanding examples of this include monazite crystals from the tin deposits in Llallagua, Bolivia; ancylite from Mont Saint-Hilaire, Quebec, Canada; and lanthanite from the Saucon Valley, Pennsylvania, United States. As with neodymium glasses, such minerals change their colors under differing lighting conditions. The absorption bands of neodymium interact with the visible emission spectrum of mercury vapor, with the unfiltered shortwave UV light causing neodymium-containing minerals to reflect a distinctive green color. This can be observed with monazite-containing sands or bastnäsite-containing ore. Neodymium magnets (actually an alloy, Nd2Fe14B) are the strongest permanent magnets known. A neodymium magnet of a few grams can lift a thousand times its own weight. These magnets are cheaper, lighter, and stronger than samarium–cobalt magnets. However, they are not superior in every aspect, as neodymium-based magnets lose their magnetism at lower temperatures and tend to corrode, while samarium–cobalt magnets do not. Neodymium magnets appear in products such as microphones, professional loudspeakers, in-ear headphones, guitar and bass guitar pick-ups, and computer hard disks, where low mass, small volume, or strong magnetic fields are required. Neodymium is used in the electric motors of hybrid and electric automobiles and in the electricity generators of some designs of commercial wind turbines (only wind turbines with "permanent magnet" generators use neodymium). For example, the drive electric motors of each Toyota Prius require one kilogram (2.2 pounds) of neodymium. In 2020, physics researchers at Radboud University and Uppsala University announced they had observed a behavior known as "self-induced spin glass" in the atomic structure of neodymium. One of the researchers explained, "…we are specialists in scanning tunneling microscopy. It allows us to see the structure of individual atoms, and we can resolve the north and south poles of the atoms.
With this advancement in high-precision imaging, we were able to discover the behavior in neodymium, because we could resolve the incredibly small changes in the magnetic structure." Neodymium behaves in a complex magnetic way that had not been seen before in a periodic-table element. Certain transparent materials with a small concentration of neodymium ions can be used in lasers as gain media for infrared wavelengths (1054–1064 nm), e.g. Nd:YAG (yttrium aluminium garnet), Nd:YLF (yttrium lithium fluoride), Nd:YVO4 (yttrium orthovanadate), and Nd:glass. Neodymium-doped crystals (typically Nd:YVO4) generate high-powered infrared laser beams which are converted to green laser light in commercial DPSS hand-held lasers and laser pointers. The current laser at the UK Atomic Weapons Establishment (AWE), the HELEN (High Energy Laser Embodying Neodymium) 1-terawatt neodymium-glass laser, can access the midpoints of pressure and temperature regions and is used to acquire data for modeling how density, temperature, and pressure interact inside warheads. HELEN can create plasmas of around one million kelvin, from which opacity and transmission of radiation are measured. Neodymium glass solid-state lasers are used in extremely high power (terawatt scale), high energy (megajoules) multiple beam systems for inertial confinement fusion. Nd:glass lasers are usually frequency tripled to the third harmonic at 351 nm in laser fusion devices. Neodymium glass (Nd:glass) is produced by the inclusion of neodymium oxide (Nd2O3) in the glass melt. Usually in daylight or incandescent light neodymium glass appears lavender, but it appears pale blue under fluorescent lighting. Neodymium may be used to color glass in delicate shades ranging from pure violet through wine-red and warm gray. The first commercial use of purified neodymium was in glass coloration, starting with experiments by Leo Moser in November 1927. The resulting "Alexandrite" glass remains a signature color of the Moser glassworks to this day. Neodymium glass was widely emulated in the early 1930s by American glasshouses, most notably Heisey, Fostoria ("wisteria"), Cambridge ("heatherbloom"), and Steuben ("wisteria"), and elsewhere (e.g. Lalique, in France, or Murano). Tiffin's "twilight" remained in production from about 1950 to 1980. Current sources include glassmakers in the Czech Republic, the United States, and China. The sharp absorption bands of neodymium cause the glass color to change under different lighting conditions, being reddish-purple under daylight or yellow incandescent light, but blue under white fluorescent lighting, or greenish under trichromatic lighting. This color-change phenomenon is highly prized by collectors. In combination with gold or selenium, red colors are produced. Since neodymium coloration depends upon "forbidden" f-f transitions deep within the atom, there is relatively little influence on the color from the chemical environment, so the color is impervious to the thermal history of the glass. However, for the best color, iron-containing impurities need to be minimized in the silica used to make the glass. The same forbidden nature of the f-f transitions makes rare-earth colorants less intense than those provided by most d-transition elements, so more has to be used in a glass to achieve the desired color intensity. The original Moser recipe used about 5% of neodymium oxide in the glass melt, a sufficient quantity such that Moser referred to these as being "rare-earth doped" glasses.
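A small arithmetic check of the frequency-tripling figure quoted above; the 1053 nm fundamental is a typical Nd:glass line assumed here for illustration, not a value taken from the source:

```python
# Illustrative check of the frequency-tripling figure quoted above. Tripling the
# frequency of light divides its wavelength by three; 1053 nm is a typical
# Nd:glass emission line, assumed here purely for illustration.
fundamental_nm = 1053.0
third_harmonic_nm = fundamental_nm / 3
print(f"Third harmonic: {third_harmonic_nm:.0f} nm")   # ~351 nm, matching the value above
```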
Being a strong base, that level of neodymium would have affected the melting properties of the glass, and the lime content of the glass might have had to be adjusted accordingly. Light transmitted through neodymium glasses shows unusually sharp absorption bands; the glass is used in astronomical work to produce sharp bands by which spectral lines may be calibrated. Another application is the creation of selective astronomical filters to reduce the effect of light pollution from sodium and fluorescent lighting while passing other colours, especially dark red hydrogen-alpha emission from nebulae. Neodymium is also used to remove the green color caused by iron contaminants from glass. Neodymium is a component of "didymium" (referring to a mixture of salts of neodymium and praseodymium) used for coloring glass to make welder's and glass-blower's goggles; the sharp absorption bands obliterate the strong sodium emission at 589 nm. The similar absorption of the yellow mercury emission line at 578 nm is the principal cause of the blue color observed for neodymium glass under traditional white-fluorescent lighting. Neodymium and didymium glass are used in color-enhancing filters in indoor photography, particularly in filtering out the yellow hues from incandescent lighting. Similarly, neodymium glass is becoming more widely used directly in incandescent light bulbs. These lamps contain neodymium in the glass to filter out yellow light, resulting in a whiter light which is more like sunlight. Similar to its use in glasses, neodymium salts are used as a colorant for enamels. Neodymium metal dust is combustible and therefore an explosion hazard. Neodymium compounds, as with all rare-earth metals, are of low to moderate toxicity; however, their toxicity has not been thoroughly investigated. Neodymium dust and salts are very irritating to the eyes and mucous membranes, and moderately irritating to skin. Breathing the dust can cause lung embolisms, and accumulated exposure damages the liver. Neodymium also acts as an anticoagulant, especially when given intravenously. Neodymium magnets have been tested for medical uses such as magnetic braces and bone repair, but biocompatibility issues have prevented widespread application. Commercially available magnets made from neodymium are exceptionally strong and can attract each other from large distances. If not handled carefully, they come together very quickly and forcefully, causing injuries. For example, there is at least one documented case of a person losing a fingertip when two magnets he was using snapped together from 50 cm away. Another risk of these powerful magnets is that if more than one magnet is ingested, they can pinch soft tissues in the gastrointestinal tract. This has led to at least 1,700 emergency room visits and necessitated the recall of the Buckyballs line of toys, which were construction sets of small neodymium magnets.
https://en.wikipedia.org/wiki?curid=21276
Neptunium Neptunium is a chemical element with the symbol Np and atomic number 93. A radioactive actinide metal, neptunium is the first transuranic element. Its position in the periodic table just after uranium, named after the planet Uranus, led to it being named after Neptune, the next planet beyond Uranus. A neptunium atom has 93 protons and 93 electrons, of which seven are valence electrons. Neptunium metal is silvery and tarnishes when exposed to air. The element occurs in three allotropic forms and it normally exhibits five oxidation states, ranging from +3 to +7. It is radioactive, poisonous, pyrophoric, and capable of accumulating in bones, which makes the handling of neptunium dangerous. Although many false claims of its discovery were made over the years, the element was first synthesized by Edwin McMillan and Philip H. Abelson at the Berkeley Radiation Laboratory in 1940. Since then, most neptunium has been and still is produced by neutron irradiation of uranium in nuclear reactors. The vast majority is generated as a by-product in conventional nuclear power reactors. While neptunium itself has no commercial uses at present, it is used as a precursor for the formation of plutonium-238, which is used in radioisotope thermal generators to provide electricity for spacecraft. Neptunium has also been used in detectors of high-energy neutrons. The longest-lived isotope of neptunium, neptunium-237, is a by-product of nuclear reactors and plutonium production. It and the isotope neptunium-239 are also found in trace amounts in uranium ores due to neutron capture reactions and beta decay. Neptunium is a hard, silvery, ductile, radioactive actinide metal. In the periodic table, it is located to the right of the actinide uranium, to the left of the actinide plutonium and below the lanthanide promethium. Neptunium is a hard metal, having a bulk modulus of 118 GPa, comparable to that of manganese. Neptunium metal is similar to uranium in terms of physical workability. When exposed to air at normal temperatures, it forms a thin oxide layer. This reaction proceeds more rapidly as the temperature increases. Neptunium has been determined to melt at 639±3 °C: this low melting point, a property the metal shares with the neighboring element plutonium (which has a melting point of 639.4 °C), is due to the hybridization of the 5f and 6d orbitals and the formation of directional bonds in the metal. The boiling point of neptunium is not empirically known and the usually given value of 4174 °C is extrapolated from the vapor pressure of the element. If accurate, this would give neptunium the largest liquid range of any element, with 3535 K between its melting and boiling points. Neptunium is found in at least three allotropes. Some claims of a fourth allotrope have been made, but they have so far not been proven. This multiplicity of allotropes is common among the actinides. The crystal structures of neptunium, protactinium, uranium, and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d transition metals. α-neptunium takes on an orthorhombic structure, resembling a highly distorted body-centered cubic structure. Each neptunium atom is coordinated to four others and the Np–Np bond lengths are 260 pm. It is the densest of all the actinides and the fifth-densest of all naturally occurring elements, behind only rhenium, platinum, iridium, and osmium.
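A quick arithmetic check of the liquid-range figure quoted above, using the melting and boiling points given in the text:

```python
# Quick check of the liquid-range figure quoted above, using the melting and
# (extrapolated) boiling points given in the text. A temperature difference in
# degrees Celsius is numerically equal to the difference in kelvin.
melting_point_c = 639     # °C, measured
boiling_point_c = 4174    # °C, extrapolated from vapor pressure
liquid_range_k = boiling_point_c - melting_point_c
print(f"Liquid range: {liquid_range_k} K")   # 3535 K, as stated above
```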
α-neptunium has semimetallic properties, such as strong covalent bonding and a high electrical resistivity, and its metallic physical properties are closer to those of the metalloids than the true metals. Some allotropes of the other actinides also exhibit similar behaviour, though to a lesser degree. The densities of different isotopes of neptunium in the alpha phase are expected to be observably different: α-235Np should have density 20.303 g/cm3; α-236Np, density 20.389 g/cm3; α-237Np, density 20.476 g/cm3. β-neptunium takes on a distorted tetragonal close-packed structure. Four atoms of neptunium make up a unit cell, and the Np–Np bond lengths are 276 pm. γ-neptunium has a body-centered cubic structure and a Np–Np bond length of 297 pm. The γ form becomes less stable with increased pressure, though the melting point of neptunium also increases with pressure. The β-Np/γ-Np/liquid triple point occurs at 725 °C and 3200 MPa. Due to the presence of valence 5f electrons, neptunium and its alloys exhibit very interesting magnetic behavior, like many other actinides. This behavior can range from the itinerant band-like character characteristic of the transition metals to the local moment behavior typical of scandium, yttrium, and the lanthanides. This stems from 5f-orbital hybridization with the orbitals of the metal ligands, and from the fact that the 5f orbital is relativistically destabilized and extends outwards. For example, pure neptunium is paramagnetic, NpAl3 is ferromagnetic, NpGe3 has no magnetic ordering, and NpSn3 behaves fermionically. Investigations are underway regarding alloys of neptunium with uranium, americium, plutonium, zirconium, and iron, so as to recycle long-lived waste isotopes such as neptunium-237 into shorter-lived isotopes more useful as nuclear fuel. One neptunium-based superconductor alloy has been discovered, with the formula NpPd5Al2. Superconductivity in neptunium compounds is somewhat surprising because they often exhibit strong magnetism, which usually destroys superconductivity. The alloy has a tetragonal structure with a superconductivity transition temperature of −268.3 °C (4.9 K). Neptunium has five ionic oxidation states ranging from +3 to +7 when forming chemical compounds, which can be simultaneously observed in solutions. It is the heaviest actinide that can lose all its valence electrons in a stable compound. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds. A neptunium atom has 93 electrons, arranged in the configuration [Rn]5f46d17s2. This differs from the configuration expected by the Aufbau principle in that one electron is in the 6d subshell instead of being as expected in the 5f subshell. This is because of the similarity of the electron energies of the 5f, 6d, and 7s subshells. In forming compounds and ions, all the valence electrons may be lost, leaving behind an inert core of inner electrons with the electron configuration of the noble gas radon; more commonly, only some of the valence electrons will be lost. The electron configuration for the tripositive ion Np3+ is [Rn] 5f4, with the outermost 7s and 6d electrons lost first: this is exactly analogous to neptunium's lanthanide homolog promethium, and conforms to the trend set by the other actinides with their [Rn] 5f"n" electron configurations in the tripositive state.
The first ionization potential of neptunium was first measured in 1974, based on the assumption that the 7s electrons would ionize before the 5f and 6d electrons; more recent measurements have refined the value to 6.2657 eV. Twenty-four neptunium radioisotopes have been characterized, with the most stable being 237Np with a half-life of 2.14 million years, 236Np with a half-life of 154,000 years, and 235Np with a half-life of 396.1 days. All of the remaining radioactive isotopes have half-lives that are less than 4.5 days, and the majority of these have half-lives that are less than 50 minutes. This element also has at least four meta states, with the most stable being 236mNp with a half-life of 22.5 hours. The isotopes of neptunium range in atomic weight from 219.032 u (219Np) to 244.068 u (244Np), though 221Np and 222Np have not yet been reported. Most of the isotopes that are lighter than the most stable one, 237Np, decay primarily by electron capture, although a sizable number, most notably 229Np and 230Np, also exhibit various levels of decay via alpha emission to become protactinium. 237Np itself, being the beta-stable isobar of mass number 237, decays almost exclusively by alpha emission into 233Pa, with very rare (occurring only about once in trillions of decays) spontaneous fission and cluster decay (emission of 30Mg to form 207Tl). All but one of the known isotopes heavier than this decay exclusively via beta emission. The lone exception, 240mNp, exhibits a rare (>0.12%) decay by isomeric transition in addition to the beta emission. 237Np eventually decays to form bismuth-209 and thallium-205, unlike most other common heavy nuclei which decay into isotopes of lead. This decay chain is known as the neptunium series. This decay chain had long been extinct on Earth due to the short half-lives of all of its isotopes above bismuth-209, but is now being resurrected thanks to artificial production of neptunium on the tonne scale. The isotopes neptunium-235, -236, and -237 are predicted to be fissile; only neptunium-237's fissionability has been experimentally shown, with the critical mass being about 60 kg, only about 10 kg more than that of the commonly used uranium-235. Calculated values of the critical masses of neptunium-235, -236, and -237 respectively are 66.2 kg, 6.79 kg, and 63.6 kg: the neptunium-236 value is even lower than that of plutonium-239. In particular, 236Np also has a low neutron cross section. Despite this, a neptunium atomic bomb has never been built: uranium and plutonium have lower critical masses than 235Np and 237Np, and 236Np is difficult to purify, as it is not found in quantity in spent nuclear fuel and is nearly impossible to separate in any significant quantities from its parent 237Np. Since all isotopes of neptunium have half-lives that are many times shorter than the age of the Earth, any primordial neptunium should have decayed by now. After only about 80 million years, the concentration of even the longest-lived isotope, 237Np, would have been reduced to less than one-trillionth (10−12) of its original amount; and even if the whole Earth had initially been made of pure 237Np (ignoring that this would be well over its critical mass of 60 kg), roughly 2100 half-lives would have passed since the formation of the Solar System, and thus all of it would have decayed. Thus neptunium is present in nature only in negligible amounts produced as intermediate decay products of other isotopes.
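The figures in the preceding paragraph follow from simple half-life arithmetic. The sketch below uses the 2.14-million-year half-life given above and assumes a round Solar System age of 4.5 billion years; it reproduces the order of magnitude of the quoted numbers (roughly 40 half-lives, or on the order of 80–90 million years, to fall below one-trillionth, and roughly 2100 half-lives since the Solar System formed):

```python
import math

# Illustrative arithmetic behind the decay figures quoted above. The 237Np
# half-life is taken from the text; the Solar System age of 4.5 billion years
# is an assumed round value used only for illustration.
HALF_LIFE_YEARS = 2.14e6
SOLAR_SYSTEM_AGE_YEARS = 4.5e9

# Half-lives needed for a sample to fall below one-trillionth of its original amount
halvings_to_one_trillionth = math.log2(1e12)                              # ~40 half-lives
years_to_one_trillionth = halvings_to_one_trillionth * HALF_LIFE_YEARS    # ~85 million years

# Half-lives elapsed since the Solar System formed
elapsed_half_lives = SOLAR_SYSTEM_AGE_YEARS / HALF_LIFE_YEARS             # ~2100
remaining_fraction = 0.5 ** elapsed_half_lives   # underflows to 0.0: effectively nothing remains

print(f"~{halvings_to_one_trillionth:.0f} half-lives "
      f"(~{years_to_one_trillionth/1e6:.0f} million years) to reach one-trillionth")
print(f"~{elapsed_half_lives:.0f} half-lives since the Solar System formed")
```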
Trace amounts of the neptunium isotopes neptunium-237 and -239 are found naturally as decay products from transmutation reactions in uranium ores. In particular, 239Np and 237Np are the most common of these isotopes; they are directly formed from neutron capture by uranium-238 atoms. These neutrons come from the spontaneous fission of uranium-238, naturally occurring neutron-induced fission of uranium-235, cosmic ray spallation of nuclei, and light elements absorbing alpha particles and emitting a neutron. The half-life of 239Np is very short, although the detection of its much longer-lived daughter 239Pu in nature in 1951 definitively established its natural occurrence. In 1952, 237Np was identified and isolated from concentrates of uranium ore from the Belgian Congo: in these minerals, the ratio of neptunium-237 to uranium is less than or equal to about 10−12 to 1. Most neptunium (and plutonium) now encountered in the environment is due to atmospheric nuclear explosions that took place between the detonation of the first atomic bomb in 1945 and the ratification of the Partial Nuclear Test Ban Treaty in 1963. The total amount of neptunium released by these explosions and the few atmospheric tests that have been carried out since 1963 is estimated to be around 2500 kg. The overwhelming majority of this is composed of the long-lived isotopes 236Np and 237Np, since even the moderately long-lived 235Np (half-life 396 days) would have decayed to less than one-billionth (10−9) of its original concentration over the intervening decades. An additional very small amount of neptunium, created by neutron irradiation of natural uranium in nuclear reactor cooling water, is released when the water is discharged into rivers or lakes. The concentration of 237Np in seawater is approximately 6.5 × 10−5 millibecquerels per liter: this concentration is between 0.1% and 1% of that of plutonium. Once in the environment, neptunium generally oxidizes fairly quickly, usually to the +4 or +5 state. Regardless of its oxidation state, the element exhibits a much greater mobility than the other actinides, largely due to its ability to readily form aqueous solutions with various other elements. In one study comparing the diffusion rates of neptunium(V), plutonium(IV), and americium(III) in sandstone and limestone, neptunium penetrated more than ten times as well as the other elements. Np(V) will also react efficiently at pH levels greater than 5.5 if there are no carbonates present, and under these conditions it has also been observed to bond readily with quartz. It has also been observed to bond well with goethite, ferric oxide colloids, and several clays, including kaolinite and smectite. In mildly acidic conditions, Np(V) does not bond as readily to soil particles as its fellow actinides americium and curium, by nearly an order of magnitude. This behavior enables it to migrate rapidly through the soil while in solution without becoming fixed in place, contributing further to its mobility. Np(V) is also readily absorbed by concrete, which, because of the element's radioactivity, is a consideration that must be addressed when building nuclear waste storage facilities. When absorbed in concrete, it is reduced to Np(IV) in a relatively short period of time. Np(V) is also reduced by humic acid if it is present on the surface of goethite, hematite, and magnetite. Np(IV) is absorbed efficiently by tuff, granodiorite, and bentonite, although uptake by the latter is most pronounced in mildly acidic conditions.
It also exhibits a strong tendency to bind to colloidal particulates, an effect that is enhanced in soil with a high clay content. This behavior further contributes to the element's observed high mobility. When the first periodic table of the elements was published by Dmitri Mendeleev in the early 1870s, it showed a dash in the place after uranium, as it did in several other places for then-undiscovered elements. Other subsequent tables of known elements, including a 1913 publication of the known radioactive isotopes by Kasimir Fajans, also showed an empty place after uranium, element 92. Up to and after the discovery of the final component of the atomic nucleus, the neutron, in 1932, most scientists did not seriously consider the possibility of elements heavier than uranium. While nuclear theory at the time did not explicitly prohibit their existence, there was little evidence to suggest that they existed. However, the discovery of induced radioactivity by Irène and Frédéric Joliot-Curie in late 1933 opened up an entirely new method of researching the elements and inspired a small group of Italian scientists led by Enrico Fermi to begin a series of experiments involving neutron bombardment. Although the Joliot-Curies' experiment involved bombarding a sample of 27Al with alpha particles to produce the radioactive 30P, Fermi realized that using neutrons, which have no electrical charge, would most likely produce even better results than the positively charged alpha particles. Accordingly, in March 1934 he began systematically subjecting all of the then-known elements to neutron bombardment to determine whether others could also be induced to radioactivity. After several months of work, Fermi's group had tentatively determined that lighter elements would disperse the energy of the captured neutron by emitting a proton or alpha particle, while heavier elements would generally accomplish the same by emitting a gamma ray. This latter behavior would subsequently result in the beta decay of a neutron into a proton, thus moving the resulting isotope one place up the periodic table. When Fermi's team bombarded uranium, they observed this behavior as well, which strongly suggested that the resulting isotope had an atomic number of 93. Fermi was initially reluctant to publicize such a claim, but after his team observed several unknown half-lives in the uranium bombardment products that did not match those of any known isotope, he published a paper entitled "Possible Production of Elements of Atomic Number Higher than 92" in June 1934. In it he proposed the name ausonium (atomic symbol Ao) for element 93, after the Greek name "Ausonia" (Italy). Several theoretical objections to the claims of Fermi's paper were quickly raised; in particular, the exact process that took place when an atom captured a neutron was not well understood at the time. This, and Fermi's accidental discovery three months later that nuclear reactions could be induced by slow neutrons, cast further doubt in the minds of many scientists, notably Aristid von Grosse and Ida Noddack, that the experiment was creating element 93. While von Grosse's claim that Fermi was actually producing protactinium (element 91) was quickly tested and disproved, Noddack's proposal that the uranium had been shattered into two or more much smaller fragments was simply ignored by most because existing nuclear theory did not include a way for this to be possible.
Fermi and his team maintained that they were in fact synthesizing a new element, but the issue remained unresolved for several years. Although the many different and unknown radioactive half-lives in the experiment's results showed that several nuclear reactions were occurring, Fermi's group could not prove that element 93 was being created unless they could isolate it chemically. They and many other scientists attempted to accomplish this, including Otto Hahn and Lise Meitner, who were among the best radiochemists in the world at the time and supporters of Fermi's claim, but they all failed. Much later, it was determined that the main reason for this failure was that the predictions of element 93's chemical properties were based on a periodic table which lacked the actinide series. This arrangement placed protactinium below tantalum, uranium below tungsten, and further suggested that element 93, at that point referred to as eka-rhenium, should be similar to the group 7 elements, including manganese and rhenium. Thorium, protactinium, and uranium, with their dominant oxidation states of +4, +5, and +6 respectively, fooled scientists into thinking they belonged below hafnium, tantalum, and tungsten, rather than below the lanthanide series, which was at the time viewed as a fluke, and whose members all have dominant +3 states; neptunium, on the other hand, has a much rarer, more unstable +7 state, with +4 and +5 being the most stable. Upon finding that plutonium and the other transuranic elements also have dominant +3 and +4 states, along with the discovery of the f-block, the actinide series was firmly established. While the question of whether Fermi's experiment had produced element 93 was stalemated, two additional claims of the discovery of the element appeared, although unlike Fermi, both claimed to have observed it in nature. The first of these claims was by Czech engineer Odolen Koblic in 1934, when he extracted a small amount of material from the wash water of heated pitchblende. He proposed the name bohemium for the element, but upon analysis the sample turned out to be a mixture of tungsten and vanadium. The other claim came in 1938 from Romanian physicist Horia Hulubei and French chemist Yvette Cauchois, who reported discovering the new element via spectroscopy in minerals. They named their element sequanium, but the claim was discounted because the prevailing theory at the time was that if it existed at all, element 93 would not exist naturally. However, as neptunium does in fact occur in nature in trace amounts, as demonstrated when it was found in uranium ore in 1952, it is possible that Hulubei and Cauchois did in fact observe neptunium. Although by 1938 some scientists, including Niels Bohr, were still reluctant to accept that Fermi had actually produced a new element, he was nevertheless awarded the Nobel Prize in Physics in November 1938 "for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". A month later, the almost totally unexpected discovery of nuclear fission by Hahn, Meitner, and Otto Frisch put an end to the possibility that Fermi had discovered element 93, because most of the unknown half-lives that had been observed by Fermi's team were rapidly identified as those of fission products.
Perhaps the closest of all attempts to produce the missing element 93 was that conducted by the Japanese physicist Yoshio Nishina, working with chemist Kenjiro Kimura in 1940, just before the outbreak of the Pacific War in 1941: they bombarded 238U with fast neutrons. However, while slow neutrons tend to induce neutron capture through an (n, γ) reaction, fast neutrons tend to induce a "knock-out" (n, 2n) reaction, where one neutron is added and two more are removed, resulting in the net loss of a neutron (the reactions are sketched below). Nishina and Kimura, having tested this technique on 232Th and successfully produced the known 231Th and its long-lived beta decay daughter 231Pa (both occurring in the natural decay chain of 235U), therefore correctly assigned the new 6.75-day half-life activity they observed to the new isotope 237U. They confirmed that this isotope was also a beta emitter and must hence decay to the unknown nuclide of element 93 with mass number 237. They attempted to isolate this nuclide by carrying it with its supposed lighter congener rhenium, but no beta or alpha decay was observed from the rhenium-containing fraction: Nishina and Kimura thus correctly speculated that the half-life of this nuclide, like that of 231Pa, was very long and hence its activity would be so weak as to be unmeasurable by their equipment, thus concluding the last and closest unsuccessful search for transuranic elements. As research on nuclear fission progressed in early 1939, Edwin McMillan at the Berkeley Radiation Laboratory of the University of California, Berkeley decided to run an experiment bombarding uranium using the powerful 60-inch (1.52 m) cyclotron that had recently been built at the university. The purpose was to separate the various fission products produced by the bombardment by exploiting the enormous force that the fragments gain from their mutual electrical repulsion after fissioning. Although he did not discover anything of note from this, McMillan did observe two new beta decay half-lives in the uranium trioxide target itself, which meant that whatever was producing the radioactivity had not been violently repelled out of the target like normal fission products. He quickly realized that one of the half-lives closely matched the known 23-minute decay period of uranium-239, but the other half-life of 2.3 days was unknown. McMillan took the results of his experiment to chemist and fellow Berkeley professor Emilio Segrè to attempt to isolate the source of the radioactivity. Both scientists began their work using the prevailing theory that element 93 would have similar chemistry to rhenium, but Segrè rapidly determined that McMillan's sample was not at all similar to rhenium. Instead, when he reacted it with hydrogen fluoride (HF) with a strong oxidizing agent present, it behaved much like members of the rare earths. Since these elements comprise a large percentage of fission products, Segrè and McMillan decided that the activity must simply have been from another fission product, titling the paper "An Unsuccessful Search for Transuranium Elements". However, as more information about fission became available, the possibility that the fragments of nuclear fission could still have been present in the target became more remote. McMillan and several scientists, including Philip H. Abelson, attempted again to determine what was producing the unknown half-life. In early 1940, McMillan realized that his 1939 experiment with Segrè had failed to test the chemical reactions of the radioactive source with sufficient rigor.
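The fast-neutron route described above can be written out explicitly. This is a sketch based only on the reactions and half-lives named in the text:

```latex
% Fast-neutron "knock-out" reaction on uranium and the subsequent beta decay
^{238}\mathrm{U} + n_{\mathrm{fast}} \rightarrow {}^{237}\mathrm{U} + 2n, \qquad
^{237}\mathrm{U} \xrightarrow{\beta^-,\ 6.75\ \mathrm{d}} {}^{237}\mathrm{Np}
% The analogous test reaction on thorium
^{232}\mathrm{Th} + n_{\mathrm{fast}} \rightarrow {}^{231}\mathrm{Th} + 2n, \qquad
^{231}\mathrm{Th} \xrightarrow{\beta^-} {}^{231}\mathrm{Pa}
```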
In a new experiment, McMillan tried subjecting the unknown substance to HF in the presence of a reducing agent, something he had not done before. This reaction resulted in the sample precipitating with the HF, an action that definitively ruled out the possibility that the unknown substance was a rare earth. Shortly after this, Abelson, who had received his graduate degree from the university, visited Berkeley for a short vacation and McMillan asked the more able chemist to assist with the separation of the experiment's results. Abelson very quickly observed that whatever was producing the 2.3-day half-life did not have chemistry like any known element and was actually more similar to uranium than to a rare earth. This discovery finally allowed the source to be isolated and later, in 1945, led to the classification of the actinide series. As a final step, McMillan and Abelson prepared a much larger sample of bombarded uranium that had a prominent 23-minute half-life from 239U and demonstrated conclusively that the unknown 2.3-day half-life increased in strength in concert with a decrease in the 23-minute activity through the following reaction: 238U captures a neutron to form 239U, which then undergoes beta decay with a 23-minute half-life to the element 93 nuclide 23993, the carrier of the 2.3-day activity. This proved that the unknown radioactive source originated from the decay of uranium and, coupled with the previous observation that the source was different chemically from all known elements, proved beyond all doubt that a new element had been discovered. McMillan and Abelson published their results in a paper entitled "Radioactive Element 93" in the "Physical Review" on May 27, 1940. They did not propose a name for the element in the paper, but they soon decided on the name "neptunium" since Neptune is the next planet beyond Uranus in our solar system. McMillan and Abelson's success compared to Nishina and Kimura's near miss can be attributed to the favorable half-life of 239Np for radiochemical analysis and the quick decay of 239U, in contrast to the slower decay of 237U and the extremely long half-life of 237Np. It was also realized that the beta decay of 239Np must produce an isotope of element 94 (now called plutonium), but the quantities involved in McMillan and Abelson's original experiment were too small to isolate and identify plutonium along with neptunium. The discovery of plutonium had to wait until the end of 1940, when Glenn T. Seaborg and his team identified the isotope plutonium-238. Neptunium's unique radioactive characteristics allowed it to be traced as it moved through various compounds in chemical reactions; at first this was the only method available to prove that its chemistry was different from that of other elements. As the first isotope of neptunium to be discovered had such a short half-life, McMillan and Abelson were unable to prepare a sample that was large enough to perform chemical analysis of the new element using the technology that was then available. However, after the discovery of the long-lived 237Np isotope in 1942 by Glenn Seaborg and Arthur Wahl, forming weighable amounts of neptunium became a realistic endeavor. Its half-life was initially determined to be about 3 million years (later revised to 2.144 million years), confirming the predictions of Nishina and Kimura of a very long half-life. Early research into the element was somewhat limited because most of the nuclear physicists and chemists in the United States at the time were focused on the massive effort to research the properties of plutonium as part of the Manhattan Project. 
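The in-growth McMillan and Abelson observed follows the standard two-member Bateman decay equations for a short-lived parent (the 23-minute 239U) feeding a longer-lived daughter (the 2.3-day activity). The sketch below is an illustrative calculation only, using half-life values rounded from those quoted above; none of it comes from the original paper.

```python
import numpy as np

# Half-lives quoted above: ~23 minutes for 239U, ~2.3 days for the new activity (239Np).
T_U, T_NP = 23.0 / 60.0, 2.3 * 24.0                 # hours
LAM_U, LAM_NP = np.log(2) / T_U, np.log(2) / T_NP   # decay constants, 1/h

def activities(n0, t):
    """Bateman solution for a two-member chain starting from n0 atoms of 239U at t = 0."""
    a_parent = LAM_U * n0 * np.exp(-LAM_U * t)
    n_daughter = n0 * LAM_U / (LAM_NP - LAM_U) * (np.exp(-LAM_U * t) - np.exp(-LAM_NP * t))
    return a_parent, LAM_NP * n_daughter

for t in (0.5, 2.0, 6.0, 24.0):                     # hours after the end of the irradiation
    a_u, a_np = activities(1e9, t)
    print(f"t = {t:5.1f} h   239U activity = {a_u:10.3g}   daughter activity = {a_np:10.3g}")
```

Run over a day, the parent activity collapses on the 23-minute timescale while the daughter activity first grows and then fades slowly, the correlated behavior described in the text.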
Research into the element did continue as a minor part of the project and the first bulk sample of neptunium was isolated in 1944. Much of the research into the properties of neptunium since then has been focused on understanding how to confine it as a portion of nuclear waste. Because it has isotopes with very long half-lives, it is of particular concern in the context of designing confinement facilities that can last for thousands of years. It has found some limited uses as a radioactive tracer and a precursor for various nuclear reactions to produce useful plutonium isotopes. However, most of the neptunium that is produced as a reaction byproduct in nuclear power stations is considered to be a waste product. The vast majority of the neptunium that currently exists on Earth was produced artificially in nuclear reactions. Neptunium-237 is the most commonly synthesized isotope due to it being the only one that both can be created via neutron capture and also has a half-life long enough to allow weighable quantities to be easily isolated. As such, it is by far the most common isotope to be utilized in chemical studies of the element. Heavier isotopes of neptunium decay quickly, and lighter isotopes of neptunium cannot be produced by neutron capture, so chemical separation of neptunium from cooled spent nuclear fuel gives nearly pure 237Np. The short-lived heavier isotopes 238Np and 239Np, useful as radioactive tracers, are produced through neutron irradiation of 237Np and 238U respectively, while the longer-lived lighter isotopes 235Np and 236Np are produced through irradiation of 235U with protons and deuterons in a cyclotron. Artificial 237Np metal is usually isolated through a reaction of 237NpF3 with liquid barium or lithium at around 1200 °C and is most often extracted from spent nuclear fuel rods in kilogram amounts as a by-product in plutonium production. By weight, neptunium-237 discharges are about 5% as great as plutonium discharges and about 0.05% of spent nuclear fuel discharges. However, even this fraction still amounts to more than fifty tons per year globally. Recovering uranium and plutonium from spent nuclear fuel for reuse is one of the major processes of the nuclear fuel cycle. As it has a long half-life of just over 2 million years, the alpha emitter 237Np is one of the major isotopes of the minor actinides separated from spent nuclear fuel. Many separation methods have been used to separate out the neptunium, operating on small and large scales. The small-scale purification operations have the goals of preparing pure neptunium as a precursor of metallic neptunium and its compounds, and also to isolate and preconcentrate neptunium in samples for analysis. Most methods that separate neptunium ions exploit the differing chemical behaviour of the differing oxidation states of neptunium (from +3 to +6 or sometimes even +7) in solution. Among the methods that are or have been used are: solvent extraction (using various extractants, usually multidentate β-diketone derivatives, organophosphorus compounds, and amine compounds), chromatography using various ion-exchange or chelating resins, coprecipitation (possible matrices include LaF3, BiPO4, BaSO4, Fe(OH)3, and MnO2), electrodeposition, and biotechnological methods. Currently, commercial reprocessing plants use the Purex process, involving the solvent extraction of uranium and plutonium with tributyl phosphate. 
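The practical importance of the 2.144-million-year half-life mentioned above can be seen from the specific activity A = λN_A/M: a long half-life means a low activity per gram, so milligram-to-gram samples of 237Np can be weighed and handled, unlike the short-lived isotopes. The figures below are my own back-of-the-envelope values, taking each molar mass as the mass number.

```python
import math

N_A = 6.022e23          # Avogadro's number
YEAR = 3.156e7          # seconds per year

def specific_activity(half_life_s, mass_number):
    """Decays per second (Bq) per gram of a pure isotope."""
    lam = math.log(2) / half_life_s
    return lam * N_A / mass_number

a_np237 = specific_activity(2.144e6 * YEAR, 237)   # neptunium-237
a_np239 = specific_activity(2.3 * 86400, 239)      # neptunium-239, the ~2.3-day isotope

print(f"237Np: {a_np237:.1e} Bq/g")   # ~2.6e7 Bq/g: weighable samples are manageable
print(f"239Np: {a_np239:.1e} Bq/g")   # ~8.8e15 Bq/g: weighable samples are impractical
```

This is why essentially all bulk chemical work on the element, including the separations described above, is done with 237Np, while 238Np and 239Np serve only as tracers.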
When it is in an aqueous solution, neptunium can exist in any of its five possible oxidation states (+3 to +7), and each of these shows a characteristic color. The stability of each oxidation state is strongly dependent on various factors, such as the presence of oxidizing or reducing agents, the pH of the solution, the presence of coordination complex-forming ligands, and even the concentration of neptunium in the solution. In acidic solutions, the neptunium(III) to neptunium(VII) ions exist as Np3+, Np4+, NpO2+, NpO22+, and a Np(VII) cation. In basic solutions, they exist as the oxides and hydroxides Np(OH)3, NpO2, NpO2OH, NpO2(OH)2, and NpO53−. Not as much work has been done to characterize neptunium in basic solutions. Np3+ and Np4+ can easily be reduced and oxidized to each other, as can NpO2+ and NpO22+. Np(III) or Np3+ exists as hydrated complexes in acidic solutions. It is a dark blue-purple and is analogous to its lighter congener, the pink rare-earth ion Pm3+. In the presence of oxygen, it is quickly oxidized to Np(IV) unless strong reducing agents are also present. Nevertheless, it is the second-least easily hydrolyzed neptunium ion in water, forming the NpOH2+ ion. Np3+ is the predominant neptunium ion in solutions of pH 4–5. Np(IV) or Np4+ is pale yellow-green in acidic solutions, where it exists as hydrated complexes. It is quite unstable to hydrolysis in acidic aqueous solutions at pH 1 and above, forming NpOH3+. In basic solutions, Np4+ tends to hydrolyze to form the neutral neptunium(IV) hydroxide (Np(OH)4) and neptunium(IV) oxide (NpO2). Np(V) or NpO2+ is green-blue in aqueous solution, in which it behaves as a strong Lewis acid. It is a stable ion and is the most common form of neptunium in aqueous solutions. Unlike its neighboring homologues UO2+ and PuO2+, NpO2+ does not spontaneously disproportionate except at very low pH and high concentration, by the reaction 2 NpO2+ + 4 H+ ⇌ Np4+ + NpO22+ + 2 H2O. It hydrolyzes in basic solutions to form NpO2OH and further anionic hydroxo complexes. Np(VI) or NpO22+, the neptunyl ion, shows a light pink or reddish color in an acidic solution and yellow-green otherwise. It is a strong Lewis acid and is the main neptunium ion encountered in solutions of pH 3–4. Though stable in acidic solutions, it is quite easily reduced to the Np(V) ion, and it is not as stable as the homologous hexavalent ions of its neighbours uranium and plutonium (the uranyl and plutonyl ions). It hydrolyzes in basic solutions to form the oxo and hydroxo ion NpO2OH+ and further polynuclear hydrolysis products. Np(VII) is dark green in a strongly basic solution. Though its chemical formula in basic solution is frequently cited as NpO53−, this is a simplification and the real structure is probably closer to a hydroxo species like [NpO4(OH)2]3−. Np(VII) was first prepared in basic solution in 1967. In strongly acidic solution, Np(VII) exists as a cationic species that water quickly reduces to Np(VI). Its hydrolysis products are uncharacterized. The oxides and hydroxides of neptunium are closely related to its ions. In general, Np hydroxides at various oxidation levels are less stable than those of the actinides before it on the periodic table, such as thorium and uranium, and more stable than those of the actinides after it, such as plutonium and americium. This is because the stability of an ion increases as the ratio of its atomic number to its ionic radius increases; thus actinides higher on the periodic table more readily undergo hydrolysis. Neptunium(III) hydroxide is quite stable in acidic solutions and in environments that lack oxygen, but it will rapidly oxidize to the IV state in the presence of air. It is not soluble in water. 
Np(IV) hydroxides exist mainly as the electrically neutral Np(OH)4, and its mild solubility in water is not affected at all by the pH of the solution. This suggests that the other possible Np(IV) hydroxide, the anionic Np(OH)5−, does not have a significant presence. Because the Np(V) ion is very stable, it can only form a hydroxide in high acidity levels. When placed in a 0.1 M sodium perchlorate solution, it does not react significantly for a period of months, although a higher molar concentration of 3.0 M will result in it converting to the solid hydroxide NpO2OH almost immediately. Np(VI) hydroxide is more reactive but is still fairly stable in acidic solutions. It will form the compound NpO3·H2O in the presence of ozone under various carbon dioxide pressures. Np(VII) has not been well studied and no neutral hydroxides have been reported; it probably exists mostly as an anionic hydroxo complex such as [NpO4(OH)2]3−. Three anhydrous neptunium oxides have been reported, NpO2, Np2O5, and Np5O8, though some studies have stated that only the first two of these exist, suggesting that claims of Np5O8 are actually the result of mistaken analysis of Np2O5. However, as the full extent of the reactions that occur between neptunium and oxygen has yet to be researched, it is not certain which of these claims is accurate. Although neptunium oxides have not been produced with neptunium in oxidation states as high as those possible with the adjacent actinide uranium, neptunium oxides are more stable at lower oxidation states. This behavior is illustrated by the fact that NpO2 can be produced by simply burning neptunium salts of oxyacids in air. The greenish-brown NpO2 is very stable over a large range of pressures and temperatures and does not undergo phase transitions at low temperatures. It does show a phase transition from face-centered cubic to orthorhombic at around 33–37 GPa, although it returns to its original phase when the pressure is released. It remains stable under oxygen pressures up to 2.84 MPa and temperatures up to 400 °C. Np2O5 is black-brown in color and monoclinic with a lattice size of 418×658×409 picometres. It is relatively unstable and decomposes to NpO2 and O2 at 420–695 °C. Although Np2O5 was initially the subject of several studies that claimed to produce it with mutually contradictory methods, it was eventually prepared successfully by heating neptunium peroxide to 300–350 °C for 2–3 hours or by heating it under a layer of water in an ampoule at 180 °C. Neptunium also forms a large number of oxide compounds with a wide variety of elements, although the neptunate oxides formed with alkali metals and alkaline earth metals have been by far the most studied. Ternary neptunium oxides are generally formed by reacting NpO2 with the oxide of another element or by precipitating from an alkaline solution. Li5NpO6 has been prepared by reacting Li2O and NpO2 at 400 °C for 16 hours or by reacting Li2O2 with NpO3·H2O at 400 °C for 16 hours in a quartz tube under flowing oxygen. The alkali neptunate compounds K3NpO5, Cs3NpO5, and Rb3NpO5 are all created by similar reactions. The oxide compounds KNpO4, CsNpO4, and RbNpO4 are formed by reacting Np(VII) with the corresponding alkali metal nitrate and ozone. Additional compounds have been produced by reacting NpO3 and water with solid alkali and alkaline earth peroxides at temperatures of 400–600 °C for 15–30 hours. Some of these include Ba3(NpO5)2, Ba2NaNpO6, and Ba2LiNpO6. 
Also, a considerable number of hexavalent neptunium oxides are formed by reacting solid-state NpO2 with various alkali or alkaline earth oxides in an environment of flowing oxygen. Many of the resulting compounds also have an equivalent compound that substitutes uranium for neptunium. Some compounds that have been characterized include Na2Np2O7, Na4NpO5, Na6NpO6, and Na2NpO4. These can be obtained by heating different combinations of NpO2 and Na2O to various temperature thresholds, and further heating will also cause these compounds to exhibit different polymorphs. The lithium neptunate oxides Li6NpO6 and Li4NpO5 can be obtained with similar reactions of NpO2 and Li2O. A large number of additional alkali and alkaline earth neptunium oxide compounds such as Cs4Np5O17 and Cs2Np3O10 have been characterized with various production methods. Neptunium has also been observed to form ternary oxides with many additional elements in groups 3 through 7, although these compounds are much less well studied. Although neptunium halide compounds have not been nearly as well studied as its oxides, a fairly large number have been successfully characterized. Of these, the neptunium fluorides have been the most extensively researched, largely because of their potential use in separating the element from nuclear waste products. Four binary neptunium fluoride compounds, NpF3, NpF4, NpF5, and NpF6, have been reported. The first two are fairly stable and were first prepared in 1947. Later, NpF4 was obtained directly by heating NpO2 to various temperatures in mixtures of either hydrogen fluoride or pure fluorine gas. NpF5 is much more difficult to create and most known preparation methods involve reacting NpF4 or NpF6 compounds with various other fluoride compounds. NpF5 will decompose into NpF4 and NpF6 when heated to around 320 °C. NpF6 or neptunium hexafluoride is extremely volatile, as are its adjacent actinide compounds uranium hexafluoride (UF6) and plutonium hexafluoride (PuF6). This volatility has attracted a large amount of interest to the compound in an attempt to devise a simple method for extracting neptunium from spent nuclear power station fuel rods. NpF6 was first prepared in 1943 by reacting NpF3 and gaseous fluorine at very high temperatures, and the first bulk quantities were obtained in 1958 by heating NpF4 and dripping pure fluorine on it in a specially prepared apparatus. Additional methods that have successfully produced neptunium hexafluoride include reacting BrF3 and BrF5 with NpF4 and reacting several different neptunium oxide and fluoride compounds with anhydrous hydrogen fluoride. Four neptunium oxyfluoride compounds, NpO2F, NpOF3, NpO2F2, and NpOF4, have been reported, although none of them have been extensively studied. NpO2F2 is a pinkish solid and can be prepared by reacting NpO3·H2O and Np2F5 with pure fluorine at around 330 °C. NpOF3 and NpOF4 can be produced by reacting neptunium oxides with anhydrous hydrogen fluoride at various temperatures. Neptunium also forms a wide variety of fluoride compounds with various elements. Some of these that have been characterized include CsNpF6, Rb2NpF7, Na3NpF8, and K3NpO2F5. Two neptunium chlorides, NpCl3 and NpCl4, have been characterized. Although several attempts to create NpCl5 have been made, they have not been successful. NpCl3 is created by reducing neptunium dioxide with hydrogen and carbon tetrachloride (CCl4) and NpCl4 by reacting a neptunium oxide with CCl4 at around 500 °C. 
Other neptunium chloride compounds have also been reported, including NpOCl2, Cs2NpCl6, Cs3NpO2Cl4, and Cs2NaNpCl6. The neptunium bromides NpBr3 and NpBr4 have also been created; the latter by reacting aluminium bromide with NpO2 at 350 °C and the former by an almost identical procedure but with zinc present. The neptunium iodide NpI3 has also been prepared by the same method as NpBr3. Neptunium chalcogen and pnictogen compounds have been well studied, primarily as part of research into their electronic and magnetic properties and their interactions in the natural environment. Pnictide and carbide compounds have also attracted interest because of their presence in the fuel of several advanced nuclear reactor designs, although the latter group has not had nearly as much research as the former. A wide variety of neptunium sulfide compounds have been characterized, including the pure sulfide compounds NpS, NpS3, Np2S5, Np3S5, Np2S3, and Np3S4. Of these, Np2S3, prepared by reacting NpO2 with hydrogen sulfide and carbon disulfide at around 1000 °C, is the most well-studied, and three polymorphic forms are known. The α form exists up to around 1230 °C, the β up to 1530 °C, and the γ form, which can also exist as Np3S4, at higher temperatures. NpS can be created by reacting Np2S3 and neptunium metal at 1600 °C, and Np3S5 can be prepared by the decomposition of Np2S3 at 500 °C or by reacting sulfur and neptunium hydride at 650 °C. Np2S5 is made by heating a mixture of Np3S5 and pure sulfur to 500 °C. All of the neptunium sulfides except for the β and γ forms of Np2S3 are isostructural with the equivalent uranium sulfide, and several, including NpS, α−Np2S3, and β−Np2S3, are also isostructural with the equivalent plutonium sulfide. The oxysulfides NpOS, Np4O4S, and Np2O2S have also been created, although they have not been well studied. NpOS was first prepared in 1985 by vacuum sealing NpO2, Np3S5, and pure sulfur in a quartz tube and heating it to 900 °C for one week. Neptunium selenide compounds that have been reported include NpSe, NpSe3, Np2Se3, Np2Se5, Np3Se4, and Np3Se5. All of these have only been obtained by heating neptunium hydride and selenium metal to various temperatures in a vacuum for an extended period of time, and Np2Se3 is only known to exist in the γ allotrope at relatively high temperatures. Two neptunium oxyselenide compounds, NpOSe and Np2O2Se, are known; they are formed by similar methods in which the neptunium hydride is replaced with neptunium dioxide. The known neptunium telluride compounds NpTe, NpTe3, Np3Te4, Np2Te3, and Np2O2Te are formed by procedures similar to those for the selenides, and Np2O2Te is isostructural to the equivalent uranium and plutonium compounds. No neptunium–polonium compounds have been reported. Neptunium nitride (NpN) was first prepared in 1953 by reacting neptunium hydride and ammonia gas at around 750 °C in a quartz capillary tube. Later, it was produced by reacting different mixtures of nitrogen and hydrogen with neptunium metal at various temperatures. It has also been created by the reduction of neptunium dioxide with diatomic nitrogen gas at 1550 °C. NpN is isomorphous with uranium mononitride (UN) and plutonium mononitride (PuN) and has a melting point of 2830 °C under a nitrogen pressure of around 1 MPa. Two neptunium phosphide compounds have been reported, NpP and Np3P4. The first has a face-centered cubic structure and is prepared by converting neptunium metal to a powder and then reacting it with phosphine gas at 350 °C. 
Np3P4 can be created by reacting neptunium metal with red phosphorus at 740 °C in a vacuum and then allowing any extra phosphorus to sublimate away. The compound is non-reactive with water but will react with nitric acid to produce a Np(IV) solution. Three neptunium arsenide compounds have been prepared, NpAs, NpAs2, and Np3As4. The first two were first created by heating arsenic and neptunium hydride in a vacuum-sealed tube for about a week. Later, NpAs was also made by confining neptunium metal and arsenic in a vacuum tube, separating them with a quartz membrane, and heating them to just below neptunium's melting point of 639 °C, which is slightly higher than arsenic's sublimation point of 615 °C. Np3As4 is prepared by a similar procedure using iodine as a transporting agent. NpAs2 crystals are brownish gold and Np3As4 is black. The neptunium antimonide compound NpSb was created in 1971 by placing equal quantities of both elements in a vacuum tube, heating them to the melting point of antimony, and then heating the mixture further to 1000 °C for sixteen days. This procedure also created trace amounts of an additional antimonide compound, Np3Sb4. One neptunium-bismuth compound, NpBi, has also been reported. The neptunium carbides NpC, Np2C3, and NpC2 (tentative) have been reported, but have not been characterized in detail despite the high importance and utility of actinide carbides as advanced nuclear reactor fuel. NpC is a non-stoichiometric compound, and could be better labelled as NpCx (0.82 ≤ x ≤ 0.96). It may be obtained from the reaction of neptunium hydride with graphite at 1400 °C or by heating the constituent elements together in an electric arc furnace using a tungsten electrode. It reacts with excess carbon to form pure Np2C3. NpC2 is formed from heating NpO2 in a graphite crucible at 2660–2800 °C. Neptunium reacts with hydrogen in a similar manner to its neighbor plutonium, forming the hydrides NpH2+x (face-centered cubic) and NpH3 (hexagonal). These are isostructural with the corresponding plutonium hydrides, although unlike PuH2+x, the lattice parameters of NpH2+x become greater as the hydrogen content (x) increases. The hydrides require extreme care in handling as they decompose in a vacuum at 300 °C to form finely divided neptunium metal, which is pyrophoric. Being chemically stable, neptunium phosphates have been investigated for potential use in immobilizing nuclear waste. Neptunium pyrophosphate (α-NpP2O7), a green solid, has been produced in the reaction between neptunium dioxide and boron phosphate at 1100 °C, though neptunium(IV) phosphate has so far remained elusive. The series of compounds NpM2(PO4)3, where M is an alkali metal (Li, Na, K, Rb, or Cs), are all known. Some neptunium sulfates have been characterized, both aqueous and solid and at various oxidation states of neptunium (IV through VI have been observed). Additionally, neptunium carbonates have been investigated to achieve a better understanding of the behavior of neptunium in geological repositories and the environment, where it may come into contact with carbonate and bicarbonate aqueous solutions and form soluble complexes. A few organoneptunium compounds are known and chemically characterized, although not as many as for uranium due to neptunium's scarcity and radioactivity. The most well known organoneptunium compounds are the cyclopentadienyl and cyclooctatetraenyl compounds and their derivatives. 
The trivalent cyclopentadienyl compound Np(C5H5)3·THF was obtained in 1972 from reacting Np(C5H5)3Cl with sodium, although the simpler unsolvated Np(C5H5)3 could not be obtained. Tetravalent neptunium cyclopentadienyl, a reddish-brown complex, was synthesized in 1968 by reacting neptunium(IV) chloride with potassium cyclopentadienide. It is soluble in benzene and THF, and is less sensitive to oxygen and water than Pu(C5H5)3 and Am(C5H5)3. Other Np(IV) cyclopentadienyl compounds are known for many ligands: they have the general formula (C5H5)3NpL, where L represents a ligand. Neptunocene, Np(C8H8)2, was synthesized in 1970 by reacting neptunium(IV) chloride with K2(C8H8). It is isomorphous to uranocene and plutonocene, and they behave chemically identically: all three compounds are insensitive to water or dilute bases but are sensitive to air, reacting quickly to form oxides, and are only slightly soluble in benzene and toluene. Other known neptunium cyclooctatetraenyl derivatives include Np(RC8H7)2 (R = ethanol, butanol) and KNp(C8H8)·2THF, which is isostructural to the corresponding plutonium compound. In addition, neptunium hydrocarbyls have been prepared, and solvated triiodide complexes of neptunium are a precursor to many organoneptunium and inorganic neptunium compounds. There is much interest in the coordination chemistry of neptunium, because its five oxidation states all exhibit their own distinctive chemical behavior, and the coordination chemistry of the actinides is heavily influenced by the actinide contraction (the greater-than-expected decrease in ionic radii across the actinide series, analogous to the lanthanide contraction). Few neptunium(III) coordination compounds are known, because Np(III) is readily oxidized by atmospheric oxygen while in aqueous solution. However, sodium formaldehyde sulfoxylate can reduce Np(IV) to Np(III), stabilizing the lower oxidation state and forming various sparingly soluble hydrated Np(III) coordination complexes. Many neptunium(IV) coordination compounds have been reported, the first of which is isostructural with the analogous uranium(IV) coordination compound. Other Np(IV) coordination compounds are known, some involving other metals such as cobalt (an octahydrate formed at 400 K) and copper (a hexahydrate formed at 600 K). Complex nitrate compounds are also known: the experimenters who produced them in 1986 and 1987 produced single crystals by slow evaporation of the Np(IV) solution at ambient temperature in concentrated nitric acid and excess 2,2′-pyrimidine. The coordination chemistry of neptunium(V) has been extensively researched due to the presence of cation–cation interactions in the solid state, which had already been known for actinyl ions. Some known such compounds include a neptunyl dimer octahydrate and neptunium glycolate, both of which form green crystals. Neptunium(VI) compounds range from the simple oxalate (which is unstable, usually becoming Np(IV)) to considerably more complicated green coordination compounds. Extensive study has been performed on a family of compounds containing a monovalent cation M together with an actinide An, where An is either uranium, neptunium, or plutonium. Since 1967, when neptunium(VII) was discovered, some coordination compounds with neptunium in the +7 oxidation state have been prepared and studied. The first such compound to be reported was initially characterized in 1968 as a hydrate of indefinite water content, but was suggested in 1973 to actually be a dihydrate, based on the fact that Np(VII) occurs as the hydroxo anion [NpO4(OH)2]3− in aqueous solution. 
This compound forms dark green prismatic crystals with a maximum edge length of 0.15–0.4 mm. Most neptunium coordination complexes known in solution involve the element in the +4, +5, and +6 oxidation states: only a few studies have been done on neptunium(III) and (VII) coordination complexes. For the former, the halide complexes NpX2+ (X = Cl, Br) were obtained in 1966 in concentrated LiCl and LiBr solutions, respectively; for the latter, 1970 experiments discovered that Np(VII) could form sulfate complexes in acidic solutions, and these were found to have higher stability constants than those of the neptunyl ion (NpO22+). A great many complexes for the other neptunium oxidation states are known: the inorganic ligands involved are the halides, iodate, azide, nitride, nitrate, thiocyanate, sulfate, carbonate, chromate, and phosphate. Many organic ligands are known to be able to be used in neptunium coordination complexes: they include acetate, propionate, glycolate, lactate, oxalate, malonate, phthalate, mellitate, and citrate. Analogously to its neighbours uranium and plutonium, the order of the neptunium ions in terms of complex formation ability is Np4+ > NpO22+ ≥ Np3+ > NpO2+. (The relative order of the middle two neptunium ions depends on the ligands and solvents used.) The stability sequence for Np(IV), Np(V), and Np(VI) complexes with monovalent inorganic ligands is F− > H2PO4− > SCN− > NO3− > Cl− > ClO4−; the order for divalent inorganic ligands is CO32− > HPO42− > SO42−. These follow the strengths of the corresponding acids. The divalent ligands are more strongly complexing than the monovalent ones. NpO2+ can also form complex ions with the metal cations M (M = Al, Ga, Sc, In, Fe, Cr, Rh) in perchloric acid solution: the strength of interaction between the two cations follows the order Fe > In > Sc > Ga > Al. The neptunyl and uranyl ions can also form a complex together. An important use of 237Np is as a precursor in plutonium production, where it is irradiated with neutrons to create 238Pu, an alpha emitter for radioisotope thermal generators for spacecraft and military applications. 237Np will capture a neutron to form 238Np, which then undergoes beta decay with a half-life of just over two days to 238Pu. 238Pu also exists in sizable quantities in spent nuclear fuel but would have to be separated from other isotopes of plutonium. Irradiating neptunium-237 with electron beams, provoking bremsstrahlung, also produces quite pure samples of the isotope plutonium-236, useful as a tracer to determine plutonium concentration in the environment. Neptunium is fissionable, and could theoretically be used as fuel in a fast neutron reactor or a nuclear weapon, with a critical mass of around 60 kilograms. In 1992, the U.S. Department of Energy declassified the statement that neptunium-237 "can be used for a nuclear explosive device". It is not believed that an actual weapon has ever been constructed using neptunium. As of 2009, the world production of neptunium-237 by commercial power reactors was over 1000 critical masses a year, but to extract the isotope from irradiated fuel elements would be a major industrial undertaking. In September 2002, researchers at the Los Alamos National Laboratory briefly created the first known nuclear critical mass using neptunium in combination with shells of enriched uranium (uranium-235), discovering that the critical mass of a bare sphere of neptunium-237 "ranges from kilogram weights in the high fifties to low sixties," showing that it "is about as good a bomb material as [uranium-235]." 
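For a rough sense of scale, the "over 1000 critical masses a year" figure can be combined with the roughly 60-kilogram bare-sphere critical mass quoted above. The arithmetic below is my own cross-check, not a figure from the cited studies; it agrees with the separate estimate of neptunium discharges given earlier.

```python
critical_mass_kg = 60            # approximate bare-sphere critical mass of 237Np quoted above
critical_masses_per_year = 1000  # reported annual production by commercial power reactors

tonnes_per_year = critical_mass_kg * critical_masses_per_year / 1000
print(f"implied 237Np production: about {tonnes_per_year:.0f} tonnes per year")
# ~60 tonnes/year, consistent with the "more than fifty tons per year" discharge estimate above
```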
The United States federal government made plans in March 2004 to move America's supply of separated neptunium to a nuclear-waste disposal site in Nevada. 237Np is used in devices for detecting high-energy (MeV) neutrons. Neptunium accumulates in commercial household ionization-chamber smoke detectors from the decay of the (typically) 0.2 microgram of americium-241 initially present as a source of ionizing radiation. With a half-life of 432 years, the americium-241 in an ionization smoke detector includes about 3% neptunium after 20 years, and about 15% after 100 years. Neptunium-237 is the most mobile actinide in the deep geological repository environment. This makes it and its predecessors such as americium-241 candidates of interest for destruction by nuclear transmutation. Due to its long half-life, neptunium will become the major contributor to the total radiotoxicity of the waste in 10,000 years. As it is unclear what happens to the containment over that long time span, extracting the neptunium would minimize the contamination of the environment if the nuclear waste were to be mobilized after several thousand years. Neptunium has no biological role, as its half-life is short on geological timescales and it occurs naturally only in small traces. Animal tests showed that it is not absorbed via the digestive tract. When injected, it concentrates in the bones, from which it is slowly released. Finely divided neptunium metal presents a fire hazard because neptunium is pyrophoric; small grains will ignite spontaneously in air at room temperature.
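The 3% and 15% in-growth figures follow directly from the 432-year half-life of americium-241, since essentially every 241Am atom that decays becomes a 237Np atom and the neptunium itself barely decays on this timescale. A quick check using only the numbers quoted above:

```python
import math

T_HALF_AM241 = 432.0                      # years, as quoted above
lam = math.log(2) / T_HALF_AM241          # decay constant of 241Am

for t in (20, 100):
    np_fraction = 1 - math.exp(-lam * t)  # fraction of the original 241Am now present as 237Np
    print(f"after {t:3d} years: about {np_fraction * 100:.0f}% neptunium")
# prints ~3% at 20 years and ~15% at 100 years, matching the figures in the text
```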
https://en.wikipedia.org/wiki?curid=21277
Nobelium Nobelium is a synthetic chemical element with the symbol No and atomic number 102. It is named in honor of Alfred Nobel, the inventor of dynamite and benefactor of science. A radioactive metal, it is the tenth transuranic element and is the penultimate member of the actinide series. Like all elements with atomic number over 100, nobelium can only be produced in particle accelerators by bombarding lighter elements with charged particles. A total of twelve nobelium isotopes are known to exist; the most stable is 259No with a half-life of 58 minutes, but the shorter-lived 255No (half-life 3.1 minutes) is most commonly used in chemistry because it can be produced on a larger scale. Chemistry experiments have confirmed that nobelium behaves as a heavier homolog to ytterbium in the periodic table. The chemical properties of nobelium are not completely known: they are mostly only known in aqueous solution. Before nobelium's discovery, it was predicted that it would show a stable +2 oxidation state as well as the +3 state characteristic of the other actinides: these predictions were later confirmed, as the +2 state is much more stable than the +3 state in aqueous solution and it is difficult to keep nobelium in the +3 state. In the 1950s and 1960s, many claims of the discovery of nobelium were made from laboratories in Sweden, the Soviet Union, and the United States. Although the Swedish scientists soon retracted their claims, the priority of the discovery and therefore the naming of the element was disputed between Soviet and American scientists, and it was not until 1997 that the International Union of Pure and Applied Chemistry (IUPAC) credited the Soviet team with the discovery, but retained nobelium, the Swedish proposal, as the name of the element due to its long-standing use in the literature. The discovery of element 102 was a complicated process and was claimed by groups from Sweden, the United States, and the Soviet Union. The first complete and incontrovertible report of its detection only came in 1966 from the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union). The discovery of element 102 was first announced by physicists at the Nobel Institute in Sweden in 1957. The team reported that they had bombarded a curium target with carbon-13 ions for twenty-five hours in half-hour intervals. Between bombardments, ion-exchange chemistry was performed on the target. Twelve out of the fifty bombardments contained samples emitting (8.5 ± 0.1) MeV alpha particles, which were in drops that eluted earlier than fermium (atomic number Z = 100) and californium (Z = 98). The half-life reported was 10 minutes and was assigned to either 251102 or 253102, although the possibility that the alpha particles observed were from a presumably short-lived mendelevium (Z = 101) isotope created from the electron capture of element 102 was not excluded. The team proposed the name "nobelium" (No) for the new element, which was immediately approved by IUPAC, a decision which the Dubna group characterized in 1968 as being hasty. The following year, scientists at the Lawrence Berkeley National Laboratory repeated the experiment but were unable to find any 8.5 MeV events which were not background effects. In 1959, the Swedish team attempted to explain the Berkeley team's inability to detect element 102 in 1958, maintaining that they did discover it. 
However, later work has shown that no nobelium isotopes lighter than 259No (no heavier isotopes could have been produced in the Swedish experiments) with a half-life over 3 minutes exist, and that the Swedish team's results are most likely from thorium-225, which has a half-life of 8 minutes and quickly undergoes triple alpha decay to polonium-213, which has a decay energy of 8.53612 MeV. This hypothesis is lent weight by the fact that thorium-225 can easily be produced in the reaction used and would not be separated out by the chemical methods employed. Later work on nobelium also showed that the divalent state is more stable than the trivalent one and hence that the samples emitting the alpha particles could not have contained nobelium, as divalent nobelium would not have eluted with the other trivalent actinides. Thus, the Swedish team later retracted their claim and attributed the activity to background effects. The Berkeley team, consisting of Albert Ghiorso, Glenn T. Seaborg, John R. Walton and Torbjørn Sikkeland, then claimed the synthesis of element 102 in 1958. The team used the new heavy-ion linear accelerator (HILAC) to bombard a curium target (95% 244Cm and 5% 246Cm) with 13C and 12C ions. They were unable to confirm the 8.5 MeV activity claimed by the Swedes but were instead able to detect decays from fermium-250, supposedly the daughter of 254102 (produced from the curium-246), which had an apparent half-life of ~3 s. Later 1963 Dubna work confirmed that 254102 could be produced in this reaction, but that its half-life was actually much longer, closer to the roughly 50 seconds accepted today. In 1967, the Berkeley team attempted to defend their work, stating that the isotope found was indeed 250Fm but the isotope that the half-life measurements actually related to was californium-244, granddaughter of 252102, produced from the more abundant curium-244. Energy differences were then attributed to "resolution and drift problems", although these had not been previously reported and should also have influenced other results. 1977 experiments showed that 252102 indeed had a 2.3-second half-life. However, 1973 work also showed that the 250Fm recoil could easily have been produced from the isomeric transition of 250mFm (half-life 1.8 s), which could also have been formed in the reaction at the energy used. Given this, it is probable that no nobelium was actually produced in this experiment. In 1959, the team continued their studies and claimed that they were able to produce an isotope that decayed predominantly by emission of an 8.3 MeV alpha particle, with a half-life of 3 s and an associated 30% spontaneous fission branch. The activity was initially assigned to 254102 but later changed to 252102. However, they also noted that it was not certain that nobelium had been produced due to difficult conditions. The Berkeley team decided to adopt the proposed name of the Swedish team, "nobelium", for the element. Meanwhile, in Dubna, experiments were carried out in 1958 and 1960 aiming to synthesize element 102 as well. The first 1958 experiment bombarded plutonium-239 and -241 with oxygen-16 ions. Some alpha decays with energies just over 8.5 MeV were observed, and they were assigned to 251,252,253102, although the team wrote that formation of isotopes from lead or bismuth impurities (which would not produce nobelium) could not be ruled out. 
While later 1958 experiments noted that new isotopes could be produced from mercury, thallium, lead, or bismuth impurities, the scientists still stood by their conclusion that element 102 could be produced from this reaction, mentioning a half-life of under 30 seconds and a decay energy of (8.8 ± 0.5) MeV. Later 1960 experiments proved that these were background effects. 1967 experiments also lowered the decay energy to (8.6 ± 0.4) MeV, but both values are too high to possibly match those of 253No or 254No. The Dubna team later stated in 1970 and again in 1987 that these results were not conclusive. In 1961, Berkeley scientists claimed the discovery of element 103 in the reaction of californium with boron and carbon ions. They claimed the production of the isotope 257103, and also claimed to have synthesized an alpha decaying isotope of element 102 that had a half-life of 15 s and alpha decay energy 8.2 MeV. They assigned this to 255102 without giving a reason for the assignment. The values do not agree with those now known for 255No, although they do agree with those now known for 257No, and while this isotope probably played a part in this experiment, its discovery was inconclusive. Work on element 102 also continued in Dubna, and in 1964, experiments were carried out there to detect alpha-decay daughters of element 102 isotopes by synthesizing element 102 from the reaction of a uranium-238 target with neon ions. The products were carried along a silver catcher foil and purified chemically, and the isotopes 250Fm and 252Fm were detected. The yield of 252Fm was interpreted as evidence that its parent 256102 was also synthesized: as it was noted that 252Fm could also be produced directly in this reaction by the simultaneous emission of an alpha particle with the excess neutrons, steps were taken to ensure that 252Fm could not go directly to the catcher foil. The half-life detected for 256102 was 8 s, which is much higher than the more modern 1967 value of (3.2 ± 0.2) s. Further experiments were conducted in 1966 for 254102, using the reactions 243Am(15N,4n)254102 and 238U(22Ne,6n)254102, finding a half-life of (50 ± 10) s: at that time the discrepancy between this value and the earlier Berkeley value was not understood, although later work proved that the formation of the isomer 250mFm was less likely in the Dubna experiments than at the Berkeley ones. In hindsight, the Dubna results on 254102 were probably correct and can be now considered a conclusive detection of element 102. One more very convincing experiment from Dubna was published in 1966, again using the same two reactions, which concluded that 254102 indeed had a half-life much longer than the 3 seconds claimed by Berkeley. Later work in 1967 at Berkeley and 1971 at the Oak Ridge National Laboratory fully confirmed the discovery of element 102 and clarified earlier observations. In December 1966, the Berkeley group repeated the Dubna experiments and fully confirmed them, and used this data to finally assign correctly the isotopes they had previously synthesized but could not yet identify at the time, and thus claimed to have discovered nobelium in 1958 to 1961. In 1969, the Dubna team carried out chemical experiments on element 102 and concluded that it behaved as the heavier homologue of ytterbium. 
The Russian scientists proposed the name "joliotium" (Jo) for the new element after Irène Joliot-Curie, who had recently died, creating an element naming controversy that would not be resolved for several decades, with each group using its own proposed names. In 1992, the IUPAC-IUPAP Transfermium Working Group (TWG) reassessed the claims of discovery and concluded that only the Dubna work from 1966 correctly detected and assigned decays to nuclei with atomic number 102 at the time. The Dubna team are therefore officially recognised as the discoverers of nobelium, although it is possible that it was detected at Berkeley in 1959. This decision was criticized by Berkeley the following year, calling the reopening of the cases of elements 101 to 103 a "futile waste of time", while Dubna agreed with IUPAC's decision. In 1994, as part of an attempted resolution to the element naming controversy, IUPAC ratified names for elements 101–109. For element 102, it ratified the name "nobelium" (No) on the basis that it had become entrenched in the literature over the course of 30 years and that Alfred Nobel should be commemorated in this fashion. Because of outcry over the 1994 names, which mostly did not respect the choices of the discoverers, a comment period ensued, and in 1995 IUPAC named element 102 "flerovium" (Fl) as part of a new proposal, after either Georgy Flyorov or his eponymous Flerov Laboratory of Nuclear Reactions. This proposal was also not accepted, and in 1997 the name "nobelium" was restored. Today the name "flerovium", with the same symbol, refers to element 114. In the periodic table, nobelium is located to the right of the actinide mendelevium, to the left of the actinide lawrencium, and below the lanthanide ytterbium. Nobelium metal has not yet been prepared in bulk quantities, and bulk preparation is currently impossible. Nevertheless, a number of predictions have been made and some preliminary experimental results have been obtained regarding its properties. The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fn+1s2 configurations, whereas the latter have fnd1s2 configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f136d17s2 configuration over the [Rn]5f147s2 configuration for nobelium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals, although for nobelium this prediction has not yet been confirmed. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number: an effect of this is that nobelium is predominantly divalent instead of trivalent, unlike all the other lanthanides and actinides. In 1986, nobelium metal was estimated to have an enthalpy of sublimation of about 126 kJ/mol, a value close to the values for einsteinium, fermium, and mendelevium and supporting the theory that nobelium would form a divalent metal. 
Like the other divalent late actinides (except the once again trivalent lawrencium), metallic nobelium should assume a face-centered cubic crystal structure. Divalent nobelium metal should have a metallic radius of around 197 pm. Nobelium's melting point has been predicted to be 827 °C, the same value as that estimated for the neighboring element mendelevium. Its density is predicted to be around 9.9 ± 0.4 g/cm3. The chemistry of nobelium is incompletely characterized and is known only in aqueous solution, in which it can take on the +3 or +2 oxidation states, the latter being more stable. It was largely expected before the discovery of nobelium that in solution, it would behave like the other actinides, with the trivalent state being predominant; however, Seaborg predicted in 1949 that the +2 state would also be relatively stable for nobelium, as the No2+ ion would have the ground-state electron configuration [Rn]5f14, including the stable filled 5f14 shell. It took nineteen years before this prediction was confirmed. In 1967, experiments were conducted to compare nobelium's chemical behavior to that of terbium, californium, and fermium. All four elements were reacted with chlorine and the resulting chlorides were deposited along a tube, along which they were carried by a gas. It was found that the nobelium chloride produced was strongly adsorbed on solid surfaces, proving that it was not very volatile, like the chlorides of the other three investigated elements. However, both NoCl2 and NoCl3 were expected to exhibit nonvolatile behavior and hence this experiment was inconclusive as to what the preferred oxidation state of nobelium was. Determination of nobelium's favoring of the +2 state had to wait until the next year, when cation-exchange chromatography and coprecipitation experiments were carried out on around fifty thousand 255No atoms, finding that it behaved differently from the other actinides and more like the divalent alkaline earth metals. This proved that in aqueous solution, nobelium is most stable in the divalent state when strong oxidizers are absent. Later experimentation in 1974 showed that nobelium eluted with the alkaline earth metals, between Ca2+ and Sr2+. Nobelium is the only known f-block element for which the +2 state is the most common and stable one in aqueous solution. This occurs because of the large energy gap between the 5f and 6d orbitals at the end of the actinide series. It is expected that the relativistic stabilization of the 7s subshell greatly destabilizes nobelium dihydride, NoH2, and that relativistic stabilization of the 7p1/2 spinor over the 6d3/2 spinor means that excited states in nobelium atoms have 7s and 7p contribution instead of the expected 6d contribution. The long No–H distances in the NoH2 molecule and the significant charge transfer lead to extreme ionicity with a dipole moment of 5.94 D for this molecule. In this molecule, nobelium is expected to exhibit main-group-like behavior, specifically acting like an alkaline earth metal with its ns2 valence shell configuration and core-like 5f orbitals. Nobelium's complexing ability with chloride ions is most similar to that of barium, which complexes rather weakly. Its complexing ability with citrate, oxalate, and acetate in an aqueous solution of 0.5 M ammonium nitrate is between that of calcium and strontium, although it is somewhat closer to that of strontium. 
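The predicted density quoted in this section is consistent with the predicted face-centered cubic structure and the roughly 197 pm metallic radius. The check below is my own back-of-the-envelope arithmetic (taking 259, the mass number of the longest-lived isotope, as the molar mass), not a calculation from the cited predictions.

```python
import math

N_A = 6.022e23             # Avogadro's number
r_cm = 197e-10             # predicted metallic radius of divalent nobelium: 197 pm, in cm
molar_mass = 259           # g/mol, taking 259No as representative

a = 2 * math.sqrt(2) * r_cm              # fcc lattice parameter from touching spheres along a face diagonal
density = 4 * molar_mass / (N_A * a**3)  # 4 atoms per fcc unit cell

print(f"estimated density: {density:.1f} g/cm^3")   # ~9.9 g/cm^3, matching the quoted prediction
```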
The standard reduction potential of the E°(No3+→No2+) couple was estimated in 1967 to be between +1.4 and +1.5 V; it was later found in 2009 to be only about +0.75 V. The positive value shows that No2+ is more stable than No3+ and that No3+ is a good oxidizing agent. While the quoted values for the E°(No2+→No0) and E°(No3+→No0) couples vary among sources, the accepted standard estimates are −2.61 and −1.26 V. It has been predicted that the value for the E°(No4+→No3+) couple would be +6.5 V. The Gibbs energies of formation for No3+ and No2+ are estimated to be −342 and −480 kJ/mol, respectively. A nobelium atom has 102 electrons, of which three can act as valence electrons. They are expected to be arranged in the configuration [Rn]5f147s2 (ground state term symbol 1S0), although experimental verification of this electron configuration had not yet been made as of 2006. In forming compounds, all three valence electrons may be lost, leaving behind a [Rn]5f13 core: this conforms to the trend set by the other actinides with their [Rn]5fn electron configurations in the tripositive state. Nevertheless, it is more likely that only two valence electrons may be lost, leaving behind a stable [Rn]5f14 core with a filled 5f14 shell. The first ionization potential of nobelium was measured to be at most (6.65 ± 0.07) eV in 1974, based on the assumption that the 7s electrons would ionize before the 5f ones; this value has not since been refined further due to nobelium's scarcity and high radioactivity. The ionic radius of hexacoordinate and octacoordinate No3+ had been preliminarily estimated in 1978 to be around 90 and 102 pm respectively; the ionic radius of No2+ has been experimentally found to be 100 pm to two significant figures. The enthalpy of hydration of No2+ has been calculated as 1486 kJ/mol. Twelve isotopes of nobelium are known, with mass numbers 250–260 and 262; all are radioactive. Additionally, nuclear isomers are known for mass numbers 251, 253, and 254. Of these, the longest-lived isotope is 259No with a half-life of 58 minutes, and the longest-lived isomer is 251mNo with a half-life of 1.7 seconds. However, the still undiscovered isotope 261No is predicted to have a still longer half-life of 170 min. Additionally, the shorter-lived 255No (half-life 3.1 minutes) is more often used in chemical experimentation because it can be produced in larger quantities from irradiation of californium-249 with carbon-12 ions. After 259No and 255No, the next most stable nobelium isotopes are 253No (half-life 1.62 minutes), 254No (51 seconds), 257No (25 seconds), 256No (2.91 seconds), and 252No (2.57 seconds). All of the remaining nobelium isotopes have half-lives that are less than a second, and the shortest-lived known nobelium isotope (250No) has a half-life of only 0.25 milliseconds. The isotope 254No is especially interesting theoretically as it is in the middle of a series of prolate nuclei from 231Pa to 279Rg, and the formation of its nuclear isomers (of which two are known) is controlled by proton orbitals such as 2f5/2 which come just above the spherical proton shell; it can be synthesized in the reaction of 208Pb with 48Ca. The half-lives of nobelium isotopes increase smoothly from 250No to 253No. However, a dip appears at 254No, and beyond this the half-lives of even-even nobelium isotopes drop sharply as spontaneous fission becomes the dominant decay mode. For example, the half-life of 256No is almost three seconds, but that of 258No is only 1.2 milliseconds. 
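The reduction potentials and Gibbs energies of formation quoted above are related through ΔG° = −nFE° (with the usual convention that ΔG°f of an aqueous cation equals +nFE° for its reduction to the metal). The arithmetic below is my own rough consistency check; it reproduces the quoted Gibbs energies only to within a few tens of kJ/mol, which is unsurprising given that, as noted above, the potentials themselves vary among sources.

```python
F = 96.485  # Faraday constant, kJ/(V·mol)

# Quoted standard reduction potentials E°(No^n+ -> No), in volts.
for n, e_std in ((2, -2.61), (3, -1.26)):
    dG_f = n * F * e_std          # Gibbs energy of formation of No^n+(aq), kJ/mol
    print(f"No{n}+: estimated ΔG_f ≈ {dG_f:.0f} kJ/mol")
# gives roughly −504 and −365 kJ/mol, versus the quoted −480 and −342 kJ/mol
```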
This shows that at nobelium, the mutual repulsion of protons poses a limit to the region of long-lived nuclei in the actinide series. The even-odd nobelium isotopes mostly continue to have longer half-lives as their mass numbers increase, with a dip in the trend at 257No. The isotopes of nobelium are mostly produced by bombarding actinide targets (uranium, plutonium, curium, californium, or einsteinium), with the exception of nobelium-262, which is produced as the daughter of lawrencium-262. The most commonly used isotope, 255No, can be produced from bombarding curium-248 or californium-249 with carbon-12: the latter method is more common. Irradiating a 350 μg cm−2 target of californium-249 with three trillion (3 × 1012) 73 MeV carbon-12 ions per second for ten minutes can produce around 1200 nobelium-255 atoms. Once the nobelium-255 is produced, it can be separated out in a similar way as used to purify the neighboring actinide mendelevium. The recoil momentum of the produced nobelium-255 atoms is used to bring them physically far away from the target from which they are produced, bringing them onto a thin foil of metal (usually beryllium, aluminium, platinum, or gold) just behind the target in a vacuum: this is usually combined by trapping the nobelium atoms in a gas atmosphere (frequently helium), and carrying them along with a gas jet from a small opening in the reaction chamber. Using a long capillary tube, and including potassium chloride aerosols in the helium gas, the nobelium atoms can be transported over tens of meters. The thin layer of nobelium collected on the foil can then be removed with dilute acid without completely dissolving the foil. The nobelium can then be isolated by exploiting its tendency to form the divalent state, unlike the other trivalent actinides: under typically used elution conditions (bis-(2-ethylhexyl) phosphoric acid (HDEHP) as stationary organic phase and 0.05 M hydrochloric acid as mobile aqueous phase, or using 3 M hydrochloric acid as an eluant from cation-exchange resin columns), nobelium will pass through the column and elute while the other trivalent actinides remain on the column. However, if a direct "catcher" gold foil is used, the process is complicated by the need to separate out the gold using anion-exchange chromatography before isolating the nobelium by elution from chromatographic extraction columns using HDEHP.
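The production figures quoted above imply a reaction cross-section of roughly a microbarn. The estimate below is my own order-of-magnitude calculation from the stated beam intensity, target thickness, and yield; it ignores losses such as the decay of 255No during the ten-minute irradiation, so it should not be read as a measured cross-section.

```python
N_A = 6.022e23

target_atoms_per_cm2 = 350e-6 / 249 * N_A   # 350 µg/cm² of 249Cf expressed as atoms per cm²
beam_rate = 3e12                            # 73 MeV carbon-12 ions per second
duration = 10 * 60                          # seconds of irradiation
atoms_produced = 1200                       # 255No atoms, as quoted above

# atoms_produced ≈ beam_rate * duration * target_atoms_per_cm2 * cross_section
cross_section = atoms_produced / (beam_rate * duration * target_atoms_per_cm2)   # cm²
print(f"implied cross-section ≈ {cross_section:.1e} cm² ≈ {cross_section / 1e-30:.1f} microbarn")
```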
https://en.wikipedia.org/wiki?curid=21278
Norwegian Sea The Norwegian Sea () is a marginal sea in the Arctic Ocean, northwest of Norway between the North Sea and the Greenland Sea, adjoining the Barents Sea to the northeast. In the southwest, it is separated from the Atlantic Ocean by a submarine ridge running between Iceland and the Faroe Islands. To the north, the Jan Mayen Ridge separates it from the Greenland Sea. Unlike many other seas, most of the bottom of the Norwegian Sea is not part of a continental shelf and therefore lies at a great depth of about two kilometres on average. Rich deposits of oil and natural gas are found under the sea bottom and are being explored commercially, in the areas with sea depths of up to about one kilometre. The coastal zones are rich in fish that visit the Norwegian Sea from the North Atlantic or from the Barents Sea (cod) for spawning. The warm North Atlantic Current ensures relatively stable and high water temperatures, so that unlike the Arctic seas, the Norwegian Sea is ice-free throughout the year. Recent research has concluded that the large volume of water in the Norwegian Sea with its large heat absorption capacity is more important as a source of Norway's mild winters than the Gulf Stream and its extensions. The International Hydrographic Organization defines the limits of the Norwegian Sea as follows: The Norwegian Sea was formed about 250 million years ago, when the Eurasian plate of Norway and the North American Plate, including Greenland, started to move apart. The existing narrow shelf sea between Norway and Greenland began to widen and deepen. The present continental slope in the Norwegian Sea marks the border between Norway and Greenland as it stood approximately 250 million years ago. In the north it extends east from Svalbard and on the southwest between Britain and the Faroes. This continental slope contains rich fishing grounds and numerous coral reefs. Settling of the shelf after the separation of the continents has resulted in landslides, such as the Storegga Slide about 8,000 years ago that induced a major tsunami. The coasts of the Norwegian Sea were shaped during the last Ice Age. Large glaciers several kilometres high pushed into the land, forming fjords, removing the crust into the sea, and thereby extending the continental slopes. This is particularly clear off the Norwegian coast along Helgeland and north to the Lofoten Islands. The Norwegian continental shelf is between 40 and 200 kilometres wide, and has a different shape from the shelves in the North Sea and Barents Sea. It contains numerous trenches and irregular peaks, which usually have an amplitude of less than 100 metres, but can reach up to 400 metres. They are covered with a mixture of gravel, sand, and mud, and the trenches are used by fish as spawning grounds. Deeper into the sea, there are two deep basins separated by a low ridge (its deepest point at 3,000 m) between the Vøring Plateau and Jan Mayen island. The southern basin is larger and deeper, with large areas between 3,500 and 4,000 metres deep. The northern basin is shallower at 3,200–3,300 metres, but contains many individual sites going down to 3,500 metres. Submarine thresholds and continental slopes mark the borders of these basins with the adjacent seas. To the south lies the European continental shelf and the North Sea, to the east is the Eurasian continental shelf with the Barents Sea. To the west, the Scotland-Greenland Ridge separates the Norwegian Sea from the North Atlantic. 
This ridge is on average only 500 metres deep, reaching a depth of 850 metres in only a few places. To the north lie the Jan Mayen Ridge and Mohns Ridge, which lie at a depth of 2,000 metres, with some trenches reaching depths of about 2,600 metres. Four major water masses originating in the Atlantic and Arctic oceans meet in the Norwegian Sea, and the associated currents are of fundamental importance for the global climate. The warm, salty North Atlantic Current flows in from the Atlantic Ocean, and the colder and less saline Norwegian Current originates in the North Sea. The so-called East Iceland Current transports cold water south from the Norwegian Sea toward Iceland and then east, along the Arctic Circle; this current occurs in the middle water layer. Deep water flows into the Norwegian Sea from the Greenland Sea. The tides in the sea are semi-diurnal; that is, they rise twice a day, to a height of about 3.3 metres. The hydrology of the upper water layers is largely determined by the flow from the North Atlantic. It reaches a volume transport of 10 Sv (1 Sv = 1 million m³/s) and its maximum depth is 700 metres at the Lofoten Islands, but normally it stays within 500 metres. Part of it comes through the Faroe-Shetland Channel and has a comparatively high salinity of 35.3‰ (parts per thousand). This current originates in the North Atlantic Current and passes along the European continental slope; increased evaporation due to the warm European climate results in the elevated salinity. Another part passes through the Greenland-Scotland trench between the Faroe Islands and Iceland; this water has a mean salinity between 35 and 35.2‰. The flow shows strong seasonal variations and can be twice as high in winter as in summer. At the Faroe-Shetland Channel the inflow has a temperature of about 9.5 °C; by Svalbard it has cooled to about 5 °C, releasing the corresponding heat (about 250 terawatts) to the environment. The current flowing from the North Sea originates in the Baltic Sea and thus collects most of the drainage from northern Europe; this contribution is, however, relatively small. The temperature and salinity of this current show strong seasonal and annual fluctuations. Long-term measurements within the top 50 metres near the coast show a maximum temperature of 11.2 °C at the 63° N parallel in September and a minimum of 3.9 °C at the North Cape in March. The salinity varies between 34.3 and 34.6‰ and is lowest in spring owing to the inflow of melted snow from rivers. The largest rivers discharging into the sea are the Namsen, Ranelva and Vefsna. They are all relatively short, but have a high discharge rate owing to the steep, mountainous terrain they drain. A portion of the warm surface water flows directly, within the West Spitsbergen Current, from the Atlantic Ocean, along the edge of the Greenland Sea, to the Arctic Ocean. This current has a volume transport of 3–5 Sv and has a large impact on the climate. Other surface water (~1 Sv) flows along the Norwegian coast in the direction of the Barents Sea. This water may cool enough in the Norwegian Sea to sink into the deeper layers; there it displaces water that flows back into the North Atlantic. Arctic water from the East Iceland Current is mostly found in the southwestern part of the sea, near Greenland. Its properties also show significant annual fluctuations, with the long-term average temperature being below 3 °C and salinity between 34.7 and 34.9‰. 
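The quoted ~250 TW heat release can be roughly cross-checked from the transport and temperature figures above. The sketch below is an order-of-magnitude estimate using assumed typical values for seawater density and specific heat; it illustrates the arithmetic and is not a figure from the text.

```python
# Order-of-magnitude check of the quoted ~250 TW heat release, assuming
# (illustrative values): seawater density ~1025 kg/m^3, specific heat
# ~3990 J/(kg K), a steady 10 Sv inflow, and cooling from 9.5 degC to
# 5 degC between the Faroe-Shetland Channel and Svalbard.

SV = 1.0e6                 # 1 sverdrup = one million cubic metres per second

density = 1025.0           # kg/m^3, typical seawater
specific_heat = 3990.0     # J/(kg K), typical seawater
transport = 10 * SV        # m^3/s, inflow from the North Atlantic
delta_t = 9.5 - 5.0        # K, cooling along the path

heat_release_watts = density * transport * specific_heat * delta_t
print(f"heat release ~ {heat_release_watts / 1e12:.0f} TW")
# -> roughly 180 TW, the same order of magnitude as the ~250 TW quoted
#    above (the transport itself varies seasonally).
```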
The fraction of this water on the sea surface depends on the strength of the current, which in turn depends on the pressure difference between the Icelandic Low and the Azores High: the larger the difference, the stronger the current. The Norwegian Sea is connected with the Greenland Sea and the Arctic Ocean by the 2,600-metre-deep Fram Strait. The Norwegian Sea Deep Water (NSDW) occurs at depths exceeding 2,000 metres; this homogeneous layer with a salinity of 34.91‰ experiences little exchange with the adjacent seas. Its temperature is below 0 °C and drops to −1 °C at the ocean floor. Compared with the deep waters of the surrounding seas, NSDW has more nutrients but less oxygen and is relatively old. The weak deep-water exchange with the Atlantic Ocean is due to the small depth of the relatively flat Greenland-Scotland Ridge between Scotland and Greenland, an offshoot of the Mid-Atlantic Ridge. Only four areas of the Greenland-Scotland Ridge are deeper than 500 metres: the Faroe Bank Channel (about 850 metres), some parts of the Iceland-Faroe Ridge (about 600 metres), the Wyville-Thomson Ridge (620 metres), and areas between Greenland and the Denmark Strait (850 metres) – all much shallower than the Norwegian Sea. Cold deep water flows into the Atlantic through various channels: about 1.9 Sv through the Faroe Bank Channel, 1.1 Sv through the Iceland-Faroe channel, and 0.1 Sv via the Wyville-Thomson Ridge. The turbulence that occurs when the deep water spills over the Greenland-Scotland Ridge and sinks into the deep Atlantic basin mixes the adjacent water layers and forms the North Atlantic Deep Water, one of the two major deep-sea currents providing the deep ocean with oxygen. The thermohaline circulation affects the climate in the Norwegian Sea, and the regional climate can deviate significantly from the average; there is also a temperature difference of about 10 °C between the open sea and the coastline. Temperatures rose between 1920 and 1960, and the frequency of storms decreased in this period. The storminess was relatively high between 1880 and 1910, decreased significantly in 1910–1960, and then recovered to the original level. In contrast to the Greenland Sea and Arctic seas, the Norwegian Sea is ice-free year round, owing to its warm currents. The convection between the relatively warm water and cold air in the winter plays an important role in the Arctic climate. The 10 °C July isotherm (air temperature line) runs through the northern boundary of the Norwegian Sea and is often taken as the southern boundary of the Arctic. In winter, the Norwegian Sea generally has the lowest air pressure in the entire Arctic and is where most Icelandic Low depressions form. The water temperature in most parts of the sea is 2–7 °C in February and 8–12 °C in August. The Norwegian Sea is a transition zone between boreal and Arctic conditions, and thus contains flora and fauna characteristic of both climatic regions. The southern limit of many Arctic species runs through the North Cape, Iceland, and the center of the Norwegian Sea, while the northern limit of boreal species lies near the borders of the Greenland Sea with the Norwegian Sea and Barents Sea; that is, these ranges overlap. Some species, such as the scallop "Chlamys islandica" and the capelin, tend to occupy this area between the Atlantic and Arctic oceans. Most of the aquatic life in the Norwegian Sea is concentrated in the upper layers. 
Estimates for the entire North Atlantic are that only 2% of biomass is produced at depths below 1,000 metres and only 1.2% occurs near the sea floor. The phytoplankton bloom, tracked by chlorophyll concentration, peaks around 20 May. The major phytoplankton forms are diatoms, in particular the genera "Thalassiosira" and "Chaetoceros". After the spring bloom the haptophyte "Phaeocystis pouchetii" becomes dominant. Zooplankton is mostly represented by the copepods "Calanus finmarchicus" and "Calanus hyperboreus"; the former occurs about four times more often than the latter and is mostly found in the Atlantic streams, whereas "C. hyperboreus" dominates the Arctic waters; they are the main diet of most marine predators. The most important krill species are "Meganyctiphanes norvegica", "Thysanoessa inermis", and "Thysanoessa longicaudata". In contrast to the Greenland Sea, there is a significant presence of calcareous plankton (coccolithophores and Globigerinida) in the Norwegian Sea. Plankton production fluctuates strongly between years. For example, the "C. finmarchicus" yield was 28 g/m² (dry weight) in 1995 and only 8 g/m² in 1997; this correspondingly affected the populations of all its predators. Shrimp of the species "Pandalus borealis" play an important role in the diet of fish, particularly cod and blue whiting, and mostly occur at depths between 200 and 300 metres. A special feature of the Norwegian Sea is the extensive coral reefs of "Lophelia pertusa", which provide shelter to various fish species. Although these corals are widespread in many peripheral areas of the North Atlantic, they never reach such numbers and concentrations as on the Norwegian continental slopes. However, they are at risk from increasing trawling, which mechanically destroys the coral reefs. The Norwegian coastal waters are the most important spawning ground of the herring populations of the North Atlantic, and hatching occurs in March. The eggs float to the surface and are washed off the coast by the northward current. Whereas a small herring population remains in the fjords and along the northern Norwegian coast, the majority spends the summer in the Barents Sea, where it feeds on the rich plankton. Upon reaching sexual maturity, the herring return to the Norwegian Sea. The herring stock varies greatly between years. It increased in the 1920s owing to the milder climate and then collapsed in the following decades until 1970; the decrease was, however, at least partly caused by overfishing. The biomass of young hatched herring declined from 11 million tonnes in 1956 to almost zero in 1970, which affected the ecosystem not only of the Norwegian Sea but also of the Barents Sea. Enforcement of environmental and fishing regulations has resulted in a partial recovery of the herring populations since 1987. This recovery was accompanied by a decline of the capelin and cod stocks. While the capelin benefited from the reduced fishing, the temperature rise in the 1980s and competition for food with the herring resulted in a near disappearance of young capelin from the Norwegian Sea; meanwhile, the older capelin population was quickly fished out. This also reduced the population of cod – a major predator of capelin – as the herring stock was still too small to replace the capelin in the cod's diet. Blue whiting ("Micromesistius poutassou") has benefited from the decline of the herring and capelin stocks, assuming the role of major predator of plankton. The blue whiting spawns near the British Isles. 
The sea currents carry their eggs to the Norwegian Sea, and the adults also swim there to benefit from the food supply. The young spend the summer and the winter until February in Norwegian coastal waters and then return to the warmer waters west of Scotland. The Norwegian Arctic cod mostly occurs in the Barents Sea and around the Svalbard Archipelago. In the rest of the Norwegian Sea, it is found only during the reproduction season, at the Lofoten Islands, whereas "Pollachius virens" and haddock spawn in the coastal waters. Mackerel is an important commercial fish. The coral reefs are populated by different species of the genus "Sebastes". Significant numbers of minke, humpback, sei, and orca whales are present in the Norwegian Sea, and white-beaked dolphins occur in the coastal waters. Orcas and some other whales visit the sea in the summer months for feeding; their population is closely related to the herring stocks, and they follow the herring schools within the sea. With a total population of about 110,000, minke whales are by far the most common whales in the sea. They are hunted by Norway and Iceland, with a quota of about 1,000 per year in Norway. In contrast to the past, it is nowadays primarily their meat that is consumed, rather than their fat and oil. The bowhead whale used to be a major plankton predator, but it almost disappeared from the Norwegian Sea after the intense whaling of the 19th century and for a time vanished from the entire North Atlantic. Similarly, the blue whale used to form large groups between Jan Mayen and Spitsbergen, but is hardly present nowadays. Observations of northern bottlenose whales in the Norwegian Sea are rare. Other large animals of the sea are hooded and harp seals and squid. Important seabird species of the Norwegian Sea are the puffin, kittiwake and guillemot. Puffins and guillemots also suffered from the collapse of the herring population, especially the puffins on the Lofoten Islands, which had hardly any alternative to herring; their population was approximately halved between 1969 and 1987. Norway, Iceland, and Denmark/Faroe Islands share the territorial waters of the Norwegian Sea, with the largest part belonging to Norway. Norway has claimed a twelve-mile territorial limit since 2004 and an exclusive economic zone of 200 miles since 1976. Consequently, owing to the Norwegian islands of Svalbard and Jan Mayen, the southeastern, northeastern and northwestern edges of the sea fall under Norwegian jurisdiction. The southwestern border is shared between Iceland and Denmark/Faroe Islands. The greatest damage to the Norwegian Sea has been caused by extensive fishing, whaling, and pollution. The British nuclear complex at Sellafield is one of the greatest polluters, discharging radioactive waste into the sea. Other contamination comes mostly from oil and toxic substances, but also from the great number of ships sunk during the two world wars. The environmental protection of the Norwegian Sea is mainly regulated by the OSPAR Convention. Fishing has been practised near the Lofoten archipelago for hundreds of years. The coastal waters of the remote Lofoten Islands are one of the richest fishing areas in Europe, as most Atlantic cod swim to the coastal waters of Lofoten in the winter to spawn. Thus, already in the 19th century, dried cod was one of Norway's main exports and by far the most important industry in northern Norway. 
Strong sea currents, maelstroms, and especially frequent storms made fishing a dangerous occupation: several hundred men died on the "Fatal Monday" in March 1821, 300 of them from a single parish, and about a hundred boats with their crews were lost within a short time in April 1875. Whaling was also important for the Norwegian Sea. In the early 1600s, the Englishman Stephen Bennet started hunting walrus at Bear Island. In May 1607 the Muscovy Company, while looking for the Northwest Passage and exploring the sea, discovered the large populations of walrus and whales in the Norwegian Sea and started hunting them in 1610 near Spitsbergen. Later in the 17th century, Dutch ships started hunting bowhead whales near Jan Mayen; the bowhead population between Svalbard and Jan Mayen was then about 25,000 individuals. The British and Dutch were then joined by Germans, Danes, and Norwegians. Between 1615 and 1820, the waters between Jan Mayen, Svalbard, Bear Island, and Greenland, spanning the Norwegian, Greenland, and Barents Seas, were the most productive whaling area in the world. However, extensive hunting had wiped out the whales in that region by the early 20th century. For many centuries, the Norwegian Sea was regarded as the edge of the known world. The disappearance of ships there due to natural disasters gave rise to legends of monsters that stopped and sank ships (the kraken). As late as 1845, the "Encyclopædia metropolitana" contained a multi-page review by Erik Pontoppidan (1698–1764) on ship-sinking sea monsters half a mile in size. Many legends might be based on the 1539 work "Historia de gentibus septentrionalibus" by Olaus Magnus, which described the kraken and the maelstroms of the Norwegian Sea. The kraken also appears in Alfred Tennyson's poem of the same name, in Herman Melville's "Moby Dick", and in "Twenty Thousand Leagues Under the Sea" by Jules Verne. Between the Lofoten islands of Moskenesøya and Værøy, at the tiny island of Mosken, lies the Moskenstraumen – a system of tidal eddies and a whirlpool called a maelstrom. Its current speed is reported very differently by different sources, but it is one of the strongest maelstroms in the world. It was described in the 13th century in the Old Norse Poetic Edda and remained an attractive subject for painters and writers, including Edgar Allan Poe, Walter Moers and Jules Verne. The word "maelstrom" was introduced into the English language by Poe in his story "A Descent into the Maelström" (1841), which describes the Moskenstraumen. The Moskenstraumen is created by a combination of several factors, including the tides, the position of the Lofoten Islands, and the underwater topography; unlike most other whirlpools, it is located in the open sea rather than in a channel or bay. With a diameter of 40–50 metres, it can be dangerous even in modern times to small fishing vessels, which may be attracted by the abundant cod feeding on the microorganisms sucked in by the whirlpool. The fish-rich coastal waters of northern Norway have long been known and attracted skilled sailors from Iceland and Greenland. Most settlements in Iceland and Greenland were accordingly on the west coasts of the islands, which were also warmer due to the Atlantic currents. The first reasonably reliable map of northern Europe, the Carta marina of 1539, represents the Norwegian Sea as coastal waters and shows nothing north of the North Cape. 
The offshore regions of the Norwegian Sea appeared on maps in the 17th century as an important part of the then-sought Northern Sea Route and as a rich whaling ground. Jan Mayen island was discovered in 1607 and became an important base for Dutch whalers. The Dutchman Willem Barents discovered Bear Island and Svalbard, which were then used by Russian whalers known as Pomors. The islands on the edge of the Norwegian Sea were rapidly divided between nations. At the peak of whaling, some 300 ships with 12,000 crew members visited Svalbard each year. The first depth measurements of the Norwegian Sea were performed in 1773 by Constantine Phipps aboard HMS "Racehorse", as a part of his North Pole expedition. Systematic oceanographic research in the Norwegian Sea started in the late 19th century, when declines in the yields of cod and herring off the Lofoten Islands prompted the Norwegian government to investigate the matter. The zoologist Georg Ossian Sars and the meteorologist Henrik Mohn persuaded the government in 1874 to send out a scientific expedition, and between 1876 and 1878 they explored much of the sea aboard "Vøringen". The data obtained allowed Mohn to establish the first dynamic model of ocean currents, which incorporated winds, pressure differences, sea water temperature, and salinity, and agreed well with later measurements. In 2019, deposits of iron, copper, zinc and cobalt were found on the Mohns Ridge, likely originating from hydrothermal vents. Until the 20th century, the coasts of the Norwegian Sea were sparsely populated, and shipping in the sea was therefore mostly focused on fishing, whaling, and occasional coastal transportation. Since the late 19th century, the Norwegian Coastal Express sea line has connected the more densely populated south with the north of Norway by at least one trip a day. The importance of shipping in the Norwegian Sea also increased with the expansion of the Russian and Soviet navies in the Barents Sea and the development of international routes to the Atlantic through the Baltic Sea, Kattegat, Skagerrak, and North Sea. The Norwegian Sea is ice-free and provides a direct route from the Atlantic to the Russian ports in the Arctic (Murmansk, Archangel, and Kandalaksha), which are directly linked to central Russia. This route was extensively used for supplies during World War II – of 811 US ships, 720 reached Russian ports, bringing some 4 million tonnes of cargo that included about 5,000 tanks and 7,000 aircraft. The Allies lost 18 convoys and 89 merchant ships on this route. The major operations of the German Navy against the convoys included the attack on Convoy PQ 17 in July 1942, the Battle of the Barents Sea in December 1942, and the Battle of the North Cape in December 1943; they were carried out around the border between the Norwegian and Barents Seas, near the North Cape. Navigation across the Norwegian Sea declined after World War II and intensified only in the 1960s–70s with the expansion of the Soviet Northern Fleet, which was reflected in major joint naval exercises of the Soviet Northern and Baltic fleets in the Norwegian Sea. The sea was the gateway for the Soviet Navy to the Atlantic Ocean and thus to the United States, and the major Soviet port of Murmansk was just behind the border between the Norwegian and Barents Seas. The countermeasures by the NATO countries resulted in a significant naval presence in the Norwegian Sea and intense cat-and-mouse games between Soviet and NATO aircraft, ships, and especially submarines. 
A relic of the Cold War in the Norwegian Sea, the Soviet nuclear submarine K-278 Komsomolets, sank in 1989 southwest of Bear Island, at the border of the Norwegian and Barents seas, with radioactive material on board that poses a potential danger to flora and fauna. The Norwegian Sea is part of the Northern Sea Route for ships from European ports to Asia; the travel distance from Rotterdam to Tokyo is considerably shorter through the Norwegian Sea and the Northern Sea Route than via the Suez Canal. Sea ice is a common problem in the Arctic seas, but ice-free conditions along the entire northern route were observed at the end of August 2008. Russia is planning to expand its offshore oil production in the Arctic, which should increase the traffic of tankers through the Norwegian Sea to markets in Europe and America; the number of oil shipments through the northern Norwegian Sea was expected to increase from 166 in 2002 to 615 in 2015. The most important products of the Norwegian Sea are no longer fish, but oil and especially gas found under the ocean floor. Norway started undersea oil production in 1993, followed by the development of the Huldra gas field in 2001. The great depth and harsh waters of the Norwegian Sea pose significant technical challenges for offshore drilling. Whereas drilling at depths exceeding 500 metres has been conducted since 1995, only a few deep gas fields have been explored commercially. The most important current project is Ormen Lange (depth 800–1,100 m), where gas production started in 2007. With reserves of about 1.4×10¹³ cubic feet, it is the major Norwegian gas field. It is connected to the Langeled pipeline, currently the world's longest underwater pipeline, and thus to a major European gas pipeline network. Several other gas fields are being developed. A particular challenge is the Kristin field, where the temperature is as high as 170 °C and the gas pressure exceeds 900 bar (900 times normal atmospheric pressure). Further north lie the Norne and Snøhvit fields.
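For readers unfamiliar with the mixed units above, the sketch below converts the reserve and pressure figures as quoted in the preceding paragraph into metric and atmospheric units; the conversion factors are standard, and the output is only as reliable as the quoted inputs.

```python
# Quick unit conversions for the figures quoted above (illustrative only).

CUBIC_FEET_TO_M3 = 0.0283168   # cubic metres per cubic foot
BAR_TO_PASCAL = 1.0e5          # pascals per bar
ATMOSPHERE_PA = 101_325.0      # pascals per standard atmosphere

ormen_lange_reserves_ft3 = 1.4e13   # cubic feet, as quoted above
kristin_pressure_bar = 900.0        # bar, as quoted above

reserves_m3 = ormen_lange_reserves_ft3 * CUBIC_FEET_TO_M3
pressure_atm = kristin_pressure_bar * BAR_TO_PASCAL / ATMOSPHERE_PA

print(f"Ormen Lange reserves ~ {reserves_m3 / 1e9:.0f} billion m^3")   # ~396
print(f"Kristin gas pressure ~ {pressure_atm:.0f} atm")                # ~888
```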
https://en.wikipedia.org/wiki?curid=21281
Gilles Deleuze Gilles Deleuze (18 January 1925 – 4 November 1995) was a French philosopher who, from the early 1950s until his death in 1995, wrote on philosophy, literature, film, and fine art. His most popular works were the two volumes of "Capitalism and Schizophrenia": "Anti-Oedipus" (1972) and "A Thousand Plateaus" (1980), both co-written with the psychoanalyst Félix Guattari. His metaphysical treatise "Difference and Repetition" (1968) is considered by many scholars to be his magnum opus. An important part of Deleuze's oeuvre is devoted to the reading of other philosophers: the Stoics, Leibniz, Hume, Kant, Nietzsche, and Bergson, with particular influence derived from Spinoza. A. W. Moore, citing Bernard Williams's criteria for a great thinker, ranks Deleuze among the "greatest philosophers". Although he once characterized himself as a "pure metaphysician", his work has influenced a variety of disciplines across the humanities, including philosophy, art, and literary theory, as well as movements such as post-structuralism and postmodernism. Deleuze was born into a middle-class family in Paris and lived there for most of his life. His initial schooling was undertaken during World War II, during which time he attended the Lycée Carnot. He also spent a year in khâgne at the Lycée Henri IV. During the Nazi occupation of France, Deleuze's older brother, Georges, was arrested for his participation in the French Resistance and died while in transit to a concentration camp. In 1944, Deleuze went to study at the Sorbonne. His teachers there included several noted specialists in the history of philosophy, such as Georges Canguilhem, Jean Hyppolite, Ferdinand Alquié, and Maurice de Gandillac. Deleuze's lifelong interest in the canonical figures of modern philosophy owed much to these teachers. Deleuze passed the agrégation in philosophy in 1948 and taught at various lycées (Amiens, Orléans, Louis le Grand) until 1957, when he took up a position at the University of Paris. In 1953, he published his first monograph, "Empiricism and Subjectivity", on David Hume. This monograph was based on his DES ("diplôme d'études supérieures", roughly equivalent to an MA thesis), which was written under the direction of Jean Hyppolite and Georges Canguilhem. From 1960 to 1964, he held a position at the Centre National de la Recherche Scientifique. During this time he published the seminal "Nietzsche and Philosophy" (1962) and befriended Michel Foucault. From 1964 to 1969, he was a professor at the University of Lyon. In 1968, amid the ongoing May 68 demonstrations, Deleuze defended his two dissertations, later published as "Difference and Repetition" (supervised by Gandillac) and "Expressionism in Philosophy: Spinoza" (supervised by Alquié). In 1969, he was appointed to the University of Paris VIII at Vincennes/St. Denis, an experimental school organized to implement educational reform. This new university drew a number of well-known academics, including Foucault (who suggested Deleuze's hiring) and the psychoanalyst Félix Guattari. Deleuze taught at Paris VIII until his retirement in 1987. Deleuze was an atheist. He married Denise Paul "Fanny" Grandjouan in 1956. According to James Miller, Deleuze betrayed little visible interest in actually "doing" many of the risky things he so vividly conjured up in his lectures and writing. Married, with two children, he outwardly lived the life of a conventional French professor. 
His most conspicuous eccentricity was his fingernails: these he kept long and untrimmed because, as he once explained, he lacked "normal protective fingerprints" and therefore could not "touch an object, particularly a piece of cloth, with the pads of my fingers without sharp pain". However, to another interlocutor Deleuze claimed his fingernails were an homage to the Russian author Pushkin. When once asked to talk about his life, he replied: "Academics' lives are seldom interesting." Deleuze, who had suffered from respiratory ailments from a young age, developed tuberculosis in 1968 and had a lung removed. He suffered increasingly severe respiratory symptoms for the rest of his life. In the last years of his life, simple tasks such as writing required laborious effort. On 4 November 1995, he committed suicide, throwing himself from the window of his apartment. Prior to his death, Deleuze had announced his intention to write a book entitled "La Grandeur de Marx" ("The Greatness of Marx"), and left behind two chapters of an unfinished project entitled "Ensembles and Multiplicities" (these chapters have been published as the essays "Immanence: A Life" and "The Actual and the Virtual"). He is buried in the cemetery of the village of Saint-Léonard-de-Noblat. Deleuze's works fall into two groups: on one hand, monographs interpreting the work of other philosophers (Baruch Spinoza, Gottfried Wilhelm Leibniz, David Hume, Immanuel Kant, Friedrich Nietzsche, Henri Bergson, Michel Foucault) and artists (Marcel Proust, Franz Kafka, Francis Bacon); on the other, eclectic philosophical tomes organized by concept (e.g., difference, sense, events, schizophrenia, economy, cinema, desire, philosophy). However, critics and analysts see these two aspects as often overlapping, in particular owing to his prose style and the unique organization of his books, which allow for multifaceted readings. Deleuze's main philosophical project in the works he wrote prior to his collaborations with Guattari can be summarized as an inversion of the traditional metaphysical relationship between identity and difference. Traditionally, difference is seen as derivative from identity: e.g., to say that "X is different from Y" assumes some X and Y with at least relatively stable identities (as in Plato's forms). To the contrary, Deleuze claims that all identities are effects of difference. Identities are neither logically nor metaphysically prior to difference, Deleuze argues, "given that there exist differences of nature between things of the same genus." That is, not only are no two things ever the same; the categories we use to identify individuals in the first place derive from differences. Apparent identities such as "X" are composed of endless series of differences, where "X" = "the difference between x and x′", and "x′" = "the difference between...", and so forth. Difference, in other words, goes all the way down. To confront reality honestly, Deleuze argues, we must grasp beings exactly as they are, and concepts of identity (forms, categories, resemblances, unities of apperception, predicates, etc.) fail to attain what he calls "difference in itself." "If philosophy has a positive and direct relation to things, it is only insofar as philosophy claims to grasp the thing itself, according to what it is, in its difference from everything it is not, in other words, in its "internal difference"." 
Like Kant, Deleuze considers traditional notions of space and time as unifying forms imposed by the subject. He therefore concludes that pure difference is non-spatio-temporal; it is an idea, what Deleuze calls "the virtual". (The coinage refers to Proust's definition of what is constant in both the past and the present: "real without being actual, ideal without being abstract.") While Deleuze's virtual ideas superficially resemble Plato's forms and Kant's ideas of pure reason, they are not originals or models, nor do they transcend possible experience; instead they are the conditions of actual experience, the internal difference in itself. "The concept they [the conditions] form is identical to its object." A Deleuzean idea or concept of difference is therefore not a wraith-like abstraction of an experienced thing; it is a real system of differential relations that creates actual spaces, times, and sensations. Thus, Deleuze at times refers to his philosophy as a transcendental empiricism, alluding to Kant. In Kant's transcendental idealism, experience only makes sense when organized by forms of sensibility (namely, space and time) and intellectual categories (such as causality). Assuming the content of these forms and categories to be qualities of the world as it exists independently of our perceptual access, according to Kant, spawns seductive but senseless metaphysical beliefs (for example, extending the concept of causality beyond possible experience results in unverifiable speculation about a first cause). Deleuze inverts the Kantian arrangement: experience exceeds our concepts by presenting novelty, and this raw experience of difference actualizes an idea, unfettered by our prior categories, forcing us to invent new ways of thinking (see "Epistemology"). Simultaneously, Deleuze claims that being is univocal, i.e., that all of its senses are affirmed in one voice. Deleuze borrows the doctrine of "ontological univocity" from the medieval philosopher John Duns Scotus. In medieval disputes over the nature of God, many eminent theologians and philosophers (such as Thomas Aquinas) held that when one says that "God is good", God's goodness is only analogous to human goodness. Scotus argued to the contrary that when one says that "God is good", the goodness in question is exactly the same sort of goodness that is meant when one says "Jane is good". That is, God only differs from us in degree, and properties such as goodness, power, reason, and so forth are univocally applied, regardless of whether one is talking about God, a person, or a flea. Deleuze adapts the doctrine of univocity to claim that being is, univocally, difference. "With univocity, however, it is not the differences which are and must be: it is being which is Difference, in the sense that it is said of difference. Moreover, it is not we who are univocal in a Being which is not; it is we and our individuality which remains equivocal in and for a univocal Being." Here Deleuze at once echoes and inverts Spinoza, who maintained that everything that exists is a modification of the one substance, God or Nature. For Deleuze, there is no one substance, only an always-differentiating process, an origami cosmos, always folding, unfolding, refolding. Deleuze summarizes this ontology in the paradoxical formula "pluralism = monism". "Difference and Repetition" (1968) is Deleuze's most sustained and systematic attempt to work out the details of such a metaphysics, but his other works develop similar ideas. 
In "Nietzsche and Philosophy" (1962), for example, reality is a play of forces; in "Anti-Oedipus" (1972), a "body without organs"; in "What is Philosophy?" (1991), a "plane of immanence" or "chaosmos". Deleuze's unusual metaphysics entails an equally atypical epistemology, or what he calls a transformation of "the image of thought". According to Deleuze, the traditional image of thought, found in philosophers such as Aristotle, René Descartes, and Edmund Husserl, misconceives of thinking as a mostly unproblematic business. Truth may be hard to discover—it may require a life of pure theorizing, or rigorous computation, or systematic doubt—but thinking is able, at least in principle, to correctly grasp facts, forms, ideas, etc. It may be practically impossible to attain a God's-eye, neutral point of view, but that is the ideal to approximate: a disinterested pursuit that results in a determinate, fixed truth; an orderly extension of common sense. Deleuze rejects this view as papering over the metaphysical flux, instead claiming that genuine thinking is a violent confrontation with reality, an involuntary rupture of established categories. Truth changes what we think; it alters what we think is possible. By setting aside the assumption that thinking has a natural ability to recognize the truth, Deleuze says, we attain a "thought without image", a thought always determined by problems rather than solving them. "All this, however, presupposes codes or axioms which do not result by chance, but which do not have an intrinsic rationality either. It's just like theology: everything about it is quite rational if you accept sin, the immaculate conception, and the incarnation. Reason is always a region carved out of the irrational—not sheltered from the irrational at all, but traversed by it and only defined by a particular kind of relationship among irrational factors. Underneath all reason lies delirium, and drift." "The Logic of Sense", published in 1969, is one of Deleuze's most peculiar works in the field of epistemology. Michel Foucault, in his essay "Theatrum Philosophicum" about the book, attributed this to how he begins with his metaphysics but approaches it through language and truth; the book is focused on "the simple condition that instead of denouncing metaphysics as the neglect of being, we force it to speak of extrabeing". In it, he refers to epistemological paradoxes: in the first series, as he analyzes Lewis Carroll's "Alice in Wonderland", he remarks that "the personal self requires God and the world in general. But when substantives and adjectives begin to dissolve, when the names of pause and rest are carried away by the verbs of pure becoming and slide into the language of events, all identity disappears from the self, the world, and God." Deleuze's peculiar readings of the history of philosophy stem from this unusual epistemological perspective. To read a philosopher is no longer to aim at finding a single, correct interpretation, but is instead to present a philosopher's attempt to grapple with the problematic nature of reality. "Philosophers introduce new concepts, they explain them, but they don't tell us, not completely anyway, the problems to which those concepts are a response. [...] The history of philosophy, rather than repeating what a philosopher says, has to say what he must have taken for granted, what he didn't say but is nonetheless present in what he did say." 
Likewise, rather than seeing philosophy as a timeless pursuit of truth, reason, or universals, Deleuze defines philosophy as the creation of concepts. For Deleuze, concepts are not identity conditions or propositions, but metaphysical constructions that define a range of thinking, such as Plato's ideas, Descartes's "cogito", or Kant's doctrine of the faculties. A philosophical concept "posits itself and its object at the same time as it is created." In Deleuze's view, then, philosophy more closely resembles practical or artistic production than it does an adjunct to a definitive scientific description of a pre-existing world (as in the tradition of John Locke or Willard Van Orman Quine). In his later work (from roughly 1981 onward), Deleuze sharply distinguishes art, philosophy, and science as three distinct disciplines, each analyzing reality in different ways. While philosophy creates concepts, the arts create novel qualitative combinations of sensation and feeling (what Deleuze calls "percepts" and "affects"), and the sciences create quantitative theories based on fixed points of reference such as the speed of light or absolute zero (which Deleuze calls "functives"). According to Deleuze, none of these disciplines enjoys primacy over the others: they are different ways of organizing the metaphysical flux, "separate melodic lines in constant interplay with one another." For example, Deleuze does not treat cinema as an art representing an external reality, but as an ontological practice that creates different ways of organizing movement and time. Philosophy, science, and art are equally, and essentially, creative and practical. Hence, instead of asking traditional questions of identity such as "is it true?" or "what is it?", Deleuze proposes that inquiries should be functional or practical: "what does it do?" or "how does it work?" In ethics and politics, Deleuze again echoes Spinoza, albeit in a sharply Nietzschean key. In a classical liberal model of society, morality begins from individuals, who bear abstract natural rights or duties set by themselves or a God. Following his rejection of any metaphysics based on identity, Deleuze criticizes the notion of an individual as an arresting or halting of differentiation (as the etymology of the word "individual" suggests). Guided by the naturalistic ethics of Spinoza and Nietzsche, Deleuze instead seeks to understand individuals and their moralities as products of the organization of pre-individual desires and powers. In the two volumes of "Capitalism and Schizophrenia", "Anti-Oedipus" (1972) and "A Thousand Plateaus" (1980), Deleuze and Guattari describe history as a congealing and regimentation of "desiring-production" (a concept combining features of Freudian drives and Marxist labor) into the modern individual (typically neurotic and repressed), the nation-state (a society of continuous control), and capitalism (an anarchy domesticated into infantilizing commodification). Deleuze, following Karl Marx, welcomes capitalism's destruction of traditional social hierarchies as liberating, but inveighs against its homogenization of all values to the aims of the market. The first part of "Capitalism and Schizophrenia" undertakes a universal history and posits the existence of a separate socius (the social body that takes credit for production) for each mode of production: the earth for the tribe, the body of the despot for the empire, and capital for capitalism. 
In his 1990 essay "Postscript on the Societies of Control" ("Post-scriptum sur les sociétés de contrôle"), Deleuze builds on Foucault's notion of the society of discipline to argue that society is undergoing a shift in structure and control. Where societies of discipline were characterized by discrete physical enclosures (such as schools, factories, prisons, office buildings, etc.), institutions and technologies introduced since World War II have dissolved the boundaries between these enclosures. As a result, social coercion and discipline have moved into the lives of individuals considered as "masses, samples, data, markets, or 'banks'." The mechanisms of modern societies of control are described as continuous, following and tracking individuals throughout their existence via transaction records, mobile location tracking, and other personally identifiable information. But how does Deleuze square his pessimistic diagnoses with his ethical naturalism? Deleuze claims that standards of value are internal or immanent: to live well is to fully express one's power, to go to the limits of one's potential, rather than to judge what exists by non-empirical, transcendent standards. Modern society still suppresses difference and alienates persons from what they can do. To affirm reality, which is a flux of change and difference, we must overturn established identities and so become all that we can become—though we cannot know what that is in advance. The pinnacle of Deleuzean practice, then, is creativity. "Herein, perhaps, lies the secret: to bring into existence and not to judge. If it is so disgusting to judge, it is not because everything is of equal value, but on the contrary because what has value can be made or distinguished only by defying judgment. What expert judgment, in art, could ever bear on the work to come?" Deleuze's studies of individual philosophers and artists are purposely heterodox. In "Nietzsche and Philosophy", for example, Deleuze claims that Nietzsche's "On the Genealogy of Morality" (1887) is an attempt to rewrite Kant's "Critique of Pure Reason" (1781), even though Nietzsche nowhere mentions the First Critique in the "Genealogy", and the "Genealogy"'s moral topics are far removed from the epistemological focus of Kant's book. Likewise, Deleuze claims that univocity is the organizing principle of Spinoza's philosophy, despite the total absence of the term from any of Spinoza's works. Deleuze once famously described his method of interpreting philosophers as "buggery ("enculage")", as sneaking behind an author and producing an offspring which is recognizably his, yet also monstrous and different. The various monographs thus are not attempts to present what Nietzsche or Spinoza strictly intended, but re-stagings of their ideas in different and unexpected ways. Deleuze's peculiar readings aim to enact the creativity he believes is the acme of philosophical practice. A parallel in painting Deleuze points to is Francis Bacon's "Study after Velázquez"—it is quite beside the point to say that Bacon "gets Velasquez wrong". Similar considerations apply, in Deleuze's view, to his own uses of mathematical and scientific terms, "pace" critics such as Alan Sokal: "I'm not saying that Resnais and Prigogine, or Godard and Thom, are doing the same thing. I'm pointing out, rather, that there are remarkable similarities between scientific creators of functions and cinematic creators of images. And the same goes for philosophical concepts, since there are distinct concepts of these spaces." 
Along with several French and Italian Marxist-inspired thinkers like Louis Althusser,
https://en.wikipedia.org/wiki?curid=12557
Galaxy A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek "galaxias", literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few hundred million (10⁸) stars to giants with one hundred trillion (10¹⁴) stars, each orbiting its galaxy's center of mass. Galaxies are categorized according to their visual morphology as elliptical, spiral, or irregular. Many galaxies are thought to have supermassive black holes at their centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times that of the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy, with a comoving distance of 32 billion light-years from Earth; it is observed as it existed just 400 million years after the Big Bang. Research released in 2016 revised the number of galaxies in the observable universe from a previous estimate of 200 billion (2×10¹¹) to a suggested two trillion (2×10¹²) or more, implying, overall, more stars than all the grains of sand on planet Earth. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light-years) and separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 30,000 parsecs (100,000 ly) and is separated from the Andromeda Galaxy, its nearest large neighbor, by 780,000 parsecs (2.5 million ly). The space between galaxies is filled with a tenuous gas (the intergalactic medium) having an average density of less than one atom per cubic meter. The majority of galaxies are gravitationally organized into groups, clusters, and superclusters. The Milky Way is part of the Local Group, which is dominated by it and the Andromeda Galaxy and is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. Both the Local Group and the Virgo Supercluster are contained in a much larger cosmic structure named Laniakea. The word "galaxy" was borrowed via French and Medieval Latin from the Greek term for the Milky Way, "galaxías kýklos", 'milky (circle)', named after its appearance as a milky band of light in the sky. In Greek mythology, Zeus places his son born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so the baby will drink her divine milk and thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the faint band of light known as the Milky Way. In the astronomical literature, the capitalized word "Galaxy" is often used to refer to our galaxy, the Milky Way, to distinguish it from the other galaxies in our universe. The English term "Milky Way" can be traced back to a story by Chaucer. Galaxies were initially discovered telescopically and were known as "spiral nebulae". Most 18th- to 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, and they were simply thought to be part of the Milky Way; their true composition and nature remained a mystery. 
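The parsec and light-year figures quoted above can be cross-checked with a simple conversion (1 parsec is approximately 3.26 light-years). A minimal sketch of that arithmetic:

```python
# Sanity check of the parsec / light-year figures quoted above
# (1 parsec ~ 3.2616 light-years; values are approximate).

LY_PER_PARSEC = 3.2616

milky_way_diameter_pc = 30_000      # parsecs (quoted minimum diameter)
andromeda_distance_pc = 780_000     # parsecs (quoted separation)

print(f"Milky Way diameter ~ {milky_way_diameter_pc * LY_PER_PARSEC:,.0f} ly")            # ~97,800, i.e. ~100,000 ly
print(f"Distance to Andromeda ~ {andromeda_distance_pc * LY_PER_PARSEC / 1e6:.1f} million ly")  # ~2.5 million ly
```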
Observations with larger telescopes of a few nearby bright galaxies, such as the Andromeda Galaxy, began resolving them into huge conglomerations of stars; their apparent faintness and the sheer number of their stars implied distances that placed them well beyond the Milky Way. For this reason they were popularly called "island universes", but this term quickly fell into disuse, as the word "universe" implied the entirety of existence. Instead, they became known simply as galaxies. Tens of thousands of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with the numbers assigned in certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies) and the UGC (Uppsala General Catalogue of Galaxies). All the well-known galaxies appear in one or more of these catalogues, but each time under a different number. For example, Messier 109 is a spiral galaxy having the number 109 in the catalogue of Messier, and also having the designations NGC 3992, UGC 6937, CGCG 269-023, MCG +09-20-044, and PGC 37617. The realization that we live in a galaxy that is one among many parallels major discoveries that were made about the Milky Way and other nebulae. The Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BCE), however, believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (died c. 570 CE) was critical of this view, arguing that if the Milky Way is sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial. According to Mohani Mohamed, the Arabian astronomer Alhazen (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." The Persian astronomer al-Bīrūnī (973–1048) proposed the Milky Way galaxy to be "a collection of countless fragments of the nature of nebulous stars." The Andalusian astronomer Ibn Bâjjah ("Avempace", died 1138) proposed that the Milky Way is made up of many stars that almost touch one another and appear to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects are near. In the 14th century, the Syrian-born Ibn Qayyim proposed the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars." Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. 
In 1750 the English astronomer Thomas Wright, in his "An Original Theory or New Hypothesis of the Universe", speculated (correctly) that the galaxy might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from our perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with a diameter of approximately 70 kiloparsecs and the Sun far from the center. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane, but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of our host galaxy, the Milky Way, emerged. A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, the Large Magellanic Cloud, the Small Magellanic Cloud, and the Triangulum Galaxy. In the 10th century, the Persian astronomer Al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, Al-Sufi probably mentioned the Large Magellanic Cloud in his "Book of Fixed Stars" (referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived); it was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612. In 1734, the philosopher Emanuel Swedenborg in his "Principia" speculated that there may be galaxies outside our own, formed into galactic clusters that are minuscule parts of a universe extending far beyond what we can see. These views "are remarkably close to the present-day views of the cosmos." In 1745, Pierre Louis Maupertuis conjectured that some nebula-like objects are collections of stars with unique properties, including a glow exceeding the light their stars produce on their own, and repeated Johannes Hevelius's view that the bright spots are massive and flattened due to their rotation. In 1750, Thomas Wright speculated (correctly) that the Milky Way is a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects with a nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture. In 1912, Vesto Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. 
Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us. In 1917, Heber Curtis observed the nova S Andromedae within the "Great Andromeda Nebula" (as the Andromeda Galaxy, Messier object M31, was then known). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within our galaxy. As a result, he was able to arrive at a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies. In 1920 a debate took place between Harlow Shapley and Heber Curtis (the Great Debate), concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift. In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100-inch Mount Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1936 Hubble produced a classification of galactic morphology that is used to this day. In 1944, Hendrik van de Hulst predicted that microwave radiation with a wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; it was observed in 1951. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in our galaxy. These observations led to the hypothesis of a rotating bar structure in the center of our galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s, Vera Rubin uncovered a discrepancy between the observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter. Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, Hubble data helped establish that the missing dark matter in our galaxy cannot solely consist of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10¹¹) galaxies in the observable universe. Improved technology for detecting the spectra invisible to humans (radio telescopes, infrared cameras, and x-ray telescopes) allows the detection of other galaxies that are not detected by Hubble. In particular, galaxy surveys in the Zone of Avoidance (the region of the sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies. In 2016, a study published in "The Astrophysical Journal" and led by Christopher Conselice of the University of Nottingham, using 3D modeling of images collected over 20 years by the Hubble Space Telescope, concluded that there are more than two trillion (2×10¹²) galaxies in the observable universe. 
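Curtis's step from "10 magnitudes fainter" to a distance of about 150,000 parsecs follows directly from the definition of astronomical magnitudes: for two equally luminous sources, a difference of Δm magnitudes corresponds to a distance ratio of 10^(Δm/5). The sketch below illustrates the arithmetic; the 1,500-parsec reference distance for Milky Way novae is an assumed, illustrative value, not a figure from the text.

```python
# Curtis's reasoning in miniature: a source 10 magnitudes fainter (at the
# same intrinsic brightness) is 100 times farther away, since
# delta_m = 5 * log10(d2 / d1).
# The 1,500 pc reference distance below is an illustrative assumption.

delta_m = 10.0                         # magnitudes fainter, as quoted above
distance_ratio = 10 ** (delta_m / 5)   # = 100

galactic_nova_distance_pc = 1_500      # assumed typical distance of Milky Way novae
andromeda_estimate_pc = galactic_nova_distance_pc * distance_ratio

print(f"distance ratio = {distance_ratio:.0f}x")
print(f"implied distance ~ {andromeda_estimate_pc:,.0f} parsecs")  # ~150,000
```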
Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies. The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters. The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters. A shell galaxy is a type of elliptical galaxy where the stars in the galaxy's halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. The shell-like structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy. As the two galaxy centers approach, they start to oscillate around a center point, and the oscillation creates gravitational ripples that form the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over twenty shells. Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter which extends beyond the visible component, as demonstrated by the universal rotation curve concept. Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type "S", followed by a letter ("a", "b", or "c") which indicates the degree of tightness of the spiral arms and the size of the central bulge. An "Sa" galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an "Sc" galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy that has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc, as some spiral galaxies have thick bulges while others are thin and dense. 
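For reference, the numeral in the elliptical classes described above conventionally encodes the apparent flattening of the image: a galaxy whose image has semi-major axis a and semi-minor axis b is classed as En with

    n = 10 (1 - b/a),

so an E0 galaxy appears nearly circular, while an E7 galaxy appears roughly three times longer than it is wide.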
In spiral galaxies, the spiral arms have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars. A majority of spiral galaxies, including our own Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an "SB", followed by a lower-case letter ("a", "b" or "c") which indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms. Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun. Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 100,000 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have a star formation rate around 30 times that of the Milky Way. Despite the prominence of large elliptical and spiral galaxies, most galaxies are dwarf galaxies. These galaxies are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, containing only a few billion stars. Ultra-compact dwarf galaxies have recently been discovered that are only 100 parsecs across. Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered. Dwarf galaxies may also be classified as elliptical, spiral, or irregular. Since small dwarf ellipticals bear little resemblance to large ellipticals, they are often called dwarf spheroidal galaxies instead. A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This has led to the suggestion that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale. Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. 
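The approximate logarithmic spiral mentioned at the start of this passage can be written in polar form as

    r(\theta) = r_0 e^{\theta \tan \varphi},

where \varphi is the (roughly constant) pitch angle of the arm; tightly wound arms correspond to small pitch angles and open arms to larger ones.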
Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies will usually not collide, but the gas and dust within the two galaxies will interact, sometimes triggering star formation. A collision can severely distort the shape of the galaxies, forming bars, rings or tail-like structures. At the extreme of interactions are galactic mergers. In this case the relative momentum of the two galaxies is insufficient to allow the galaxies to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to morphology, as compared to the original galaxies. If one of the merging galaxies is much more massive than the other, the result is known as cannibalism. The more massive galaxy will remain relatively undisturbed by the merger, while the smaller galaxy is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy. Stars are created within galaxies from a reserve of cold gas that forms into giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. If they were to continue to do so, they would consume their reserve of gas in a time span shorter than the lifespan of the galaxy. Hence starburst activity usually lasts only about ten million years, a relatively brief period in the history of a galaxy. Starburst galaxies were more common during the early history of the universe, and, at present, still contribute an estimated 15% to the total star production rate. Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These massive stars produce supernova explosions, resulting in expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the starburst activity end. Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity. A portion of the observable galaxies are classified as active galaxies if they contain an active galactic nucleus (AGN). A significant portion of the total energy output from the galaxy is emitted by the active galactic nucleus, instead of the stars, dust and interstellar medium of the galaxy. The standard model for an active galactic nucleus is based upon an accretion disc that forms around a supermassive black hole (SMBH) at the core region of the galaxy. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood. 
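As a rough illustration of why starburst episodes are short-lived (the numbers here are illustrative assumptions, not figures from the text), the gas-depletion timescale is simply the available gas mass divided by the star formation rate,

    t_dep ≈ M_gas / SFR,

so a galaxy converting 10^8 solar masses of cold gas into stars at a rate of 10 solar masses per year would exhaust that reservoir in about 10^7 years, consistent with the ten-million-year duration quoted above.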
Blazars are believed to be active galaxies with a relativistic jet pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the viewing angle of the observer. Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei. Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses, but unlike quasars, their host galaxies are clearly detectable. Seyfert galaxies account for about 10% of all galaxies. Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied under other wavelengths, the luminosity of their cores is equivalent to the luminosity of whole galaxies the size of the Milky Way. Quasars (/ˈkweɪzɑr/) or quasi-stellar radio sources are the most energetic and distant members of active galactic nuclei. Quasars are extremely luminous and were first identified as being high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared to be similar to stars rather than extended sources similar to galaxies. Their luminosity can be 100 times that of the Milky Way. Luminous infrared galaxies or LIRGs are galaxies with luminosities (the measurement of brightness) above 10^11 L☉. LIRGs are more abundant than starburst galaxies, Seyfert galaxies and quasi-stellar objects at comparable total luminosity. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined. LIRGs are 100 billion times brighter than our Sun. Galaxies have magnetic fields of their own. They are strong enough to be dynamically important: they drive mass inflow into the centers of galaxies, they modify the formation of spiral arms and they can affect the rotation of gas in the outer regions of galaxies. Magnetic fields provide the transport of angular momentum required for the collapse of gas clouds and hence the formation of new stars. The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). For comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss), or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, our Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies, for example in M 82 and the Antennae, and in nuclear starburst regions, for example in the centers of NGC 1097 and of other barred galaxies. Galactic formation and evolution is an active area of research in astrophysics. Current cosmological models of the early universe are based on the Big Bang theory. About 300,000 years after this event, atoms of hydrogen and helium began to form, in an event called recombination. 
Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures would eventually become the galaxies we see today. Evidence for the early appearance of galaxies was found in 2006, when it was discovered that the galaxy IOK-1 has an unusually high redshift of 6.96, corresponding to just 750 million years after the Big Bang and making it the most distant and primordial galaxy yet seen. While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the universe's evolution), IOK-1's age and composition have been more reliably established. In December 2012, astronomers reported that UDFj-39546284 is the most distant object known and has a redshift value of 11.9. The object, estimated to have existed around "380 million years" after the Big Bang (which was about 13.8 billion years ago), is about 13.42 billion light-years away in light-travel distance. The existence of such early protogalaxies suggests they must have grown in the so-called "dark ages". As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 has taken 13 billion years to reach Earth, and the galaxy is now 30 billion light-years away because of the expansion of the universe over those 13 billion years. The detailed process by which early galaxies formed is an open question in astrophysics. Theories can be divided into two categories: top-down and bottom-up. In top-down theories (such as the Eggen–Lynden-Bell–Sandage [ELS] model), protogalaxies form in a large-scale simultaneous collapse lasting about one hundred million years. In bottom-up theories (such as the Searle-Zinn [SZ] model), small structures such as globular clusters form first, and then a number of such bodies accrete to form a larger galaxy. Once protogalaxies began to form and contract, the first halo stars (called Population III stars) appeared within them. These were composed almost entirely of hydrogen and helium, and may have been massive. If so, these huge stars would have quickly consumed their supply of fuel and become supernovae, releasing heavy elements into the interstellar medium. This first generation of stars re-ionized the surrounding neutral hydrogen, creating expanding bubbles of space through which light could readily travel. In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it. Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation. 
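For context on what these redshift figures mean, the redshift z of a distant source relates the cosmic scale factor at emission to its value today by

    1 + z = a(t_now) / a(t_emit),

so light received from an object at z = 11.9 was emitted when the universe was only about 1/12.9 of its present linear scale.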
During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets. The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies. The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, evidence of past collisions of the Milky Way with smaller dwarf galaxies is increasing. Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked about ten billion years ago. Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end. The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10^13–10^14 years), as the smallest, longest-lived stars in our universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions. Deep sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with another galaxy of comparable mass during the past billion years are relatively scarce. Only about five percent of the galaxies surveyed have been found to be truly isolated; however, these isolated formations may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller, satellite galaxies. Isolated galaxies can produce stars at a higher rate than normal, as their gas is not being stripped by other nearby galaxies. On the largest scale, the universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. 
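Hubble's law, referred to above, can be written as

    v = H_0 d,

where v is a galaxy's recession velocity, d its proper distance, and H_0 the Hubble constant; taking H_0 ≈ 70 km/s/Mpc (an approximate present-day value), a galaxy 100 megaparsecs away recedes at roughly 7,000 km/s.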
These associations formed early, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This ongoing merger process (as well as an influx of infalling gas) heats the inter-galactic gas within a cluster to very high temperatures, reaching 30–100 megakelvins. About 70–80% of the mass in a cluster is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent of the matter in the form of galaxies. Most galaxies are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster, and these formations contain a majority of the galaxies (as well as most of the baryonic mass) in the universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers. Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own. Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the universe appears to be the same in all directions (isotropic and homogeneous), though this notion has been challenged in recent years by numerous findings of large-scale structures that appear to exceed this scale. The Hercules-Corona Borealis Great Wall, the largest structure in the universe found so far, is 10 billion light-years (three gigaparsecs) in length. The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. The Virgo Supercluster itself is in turn part of the Pisces-Cetus Supercluster Complex, a giant galaxy filament. The peak radiation of most stars lies in the visible spectrum, so the observation of the stars that form galaxies has been a major component of optical astronomy. It is also a favorable portion of the spectrum for observing ionized H II regions, and for examining the distribution of dusty arms. The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy. 
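The virial theorem mentioned above states that, for a gravitationally bound system in equilibrium, the time-averaged kinetic energy T and potential energy U satisfy

    2⟨T⟩ + ⟨U⟩ = 0;

loosely speaking, a member galaxy remains bound to its group only while its speed stays below the escape velocity v_esc = \sqrt{2GM/r} set by the group's total mass M and the galaxy's distance r from the group's center.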
The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. (The ionosphere blocks signals below this range.) Large radio interferometers have been used to map the active jets emitted from active nuclei. Radio telescopes can also be used to observe neutral hydrogen (via 21 cm radiation), including, potentially, the non-ionized matter in the early universe that later collapsed to form galaxies. Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart by the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy.
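As a brief aside on the 21 cm hydrogen line mentioned above, its frequency follows directly from its wavelength,

    \nu = c / \lambda ≈ (3×10^8 m/s) / (0.21 m) ≈ 1.4 GHz,

which places it comfortably inside the 5 MHz to 30 GHz atmospheric radio window quoted above.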
https://en.wikipedia.org/wiki?curid=12558
Gene Hackman Eugene Allen Hackman (born January 30, 1930) is a retired American actor and novelist. In a career that spanned more than six decades, Hackman won two Academy Awards, four Golden Globes, one Screen Actors Guild Award, and two BAFTAs. Nominated for five Academy Awards, Hackman won Best Actor for his role as Jimmy "Popeye" Doyle in the critically acclaimed thriller "The French Connection" (1971), and Best Supporting Actor as "Little" Bill Daggett in the Clint Eastwood Western "Unforgiven" (1992). His other nominations for Best Supporting Actor came with the films "Bonnie and Clyde" (1967) and "I Never Sang for My Father" (1970), with a second Best Actor nomination for "Mississippi Burning" (1988). Hackman's other major film roles included "The Poseidon Adventure" (1972), "The Conversation" (1974), "French Connection II" (1975), "A Bridge Too Far" (1977), "Superman: The Movie" (1978)—as arch-villain Lex Luthor, "Hoosiers" (1986), "The Firm" (1993), "The Quick and the Dead" (1995), "Crimson Tide" (1995), "Enemy of the State" (1998), "Antz" (1998), "The Replacements" (2000), "Behind Enemy Lines" (2001), "The Royal Tenenbaums" (2001), and "Welcome to Mooseport" (2004)—his final film role before his retirement. Hackman was born in San Bernardino, California, the son of Eugene Ezra Hackman and Anna Lyda Elizabeth (née Gray). He has one brother, Richard. He has Pennsylvania Dutch (German), English, and Scottish ancestry; his mother was Canadian, and was born in Lambton, Ontario. His family moved frequently, finally settling in Danville, Illinois, where they lived in the house of his English-born maternal grandmother, Beatrice. Hackman's father operated the printing press for the "Commercial-News", a local paper. His parents divorced in 1943 and his father subsequently left the family. Hackman decided that he wanted to become an actor when he was ten years old. Hackman lived briefly in Storm Lake, Iowa, and spent his sophomore year at Storm Lake High School. He left home at age 16 and lied about his age to enlist in the United States Marine Corps. He served four and a half years as a field radio operator. He was stationed in China (Qingdao and later in Shanghai). When the Communist Revolution conquered the mainland in 1949, Hackman was assigned to Hawaii and Japan. Following his discharge in 1951, he moved to New York and had several jobs. His mother died in 1962 as a result of a fire she accidentally started while smoking. He began a study of journalism and television production at the University of Illinois under the G.I. Bill, but left and moved to California. In 1956 Hackman began pursuing an acting career. He joined the Pasadena Playhouse in California, where he befriended another aspiring actor, Dustin Hoffman. Already seen as outsiders by their classmates, Hackman and Hoffman were voted "The Least Likely To Succeed", and Hackman got the lowest score the Pasadena Playhouse had yet given. Determined to prove them wrong, Hackman moved to New York City. A 2004 article in "Vanity Fair" described Hackman, Hoffman and Robert Duvall as struggling California-born actors and close friends, sharing NYC apartments in various two-person combinations in the 1960s. To support himself between acting jobs, Hackman was working as a uniformed doorman at a Howard Johnson restaurant when, as luck would have it, he encountered an instructor from the Pasadena Playhouse, who told him, "See, Hackman—I told you you wouldn't amount to anything." 
From then on, Hackman was determined to become the finest actor he possibly could. (The three former roommates have since been nominated for 19 Academy Awards, and won five.) Hackman got various bit roles, for example on the TV series "Route 66" in 1963, and began performing in several Off-Broadway plays. In 1964 he had an offer to co-star in the play "Any Wednesday" with actress Sandy Dennis. This opened the door to film work. His first role was in "Lilith", with Warren Beatty in the leading role. In 1967 he appeared in an episode of the television series "The Invaders" entitled "The Spores". Another supporting role, Buck Barrow in 1967's "Bonnie and Clyde", earned him an Academy Award nomination as Best Supporting Actor. In 1968 he appeared in an episode of "I Spy", in the role of "Hunter", in the episode "Happy Birthday... Everybody". That same year he starred in the "CBS Playhouse" episode "My Father and My Mother" and the dystopian television film "Shadow on the Land". In 1969 he played a ski coach in "Downhill Racer" and an astronaut in "Marooned". Also that year, he played a member of a barnstorming skydiving team that entertained mostly at county fairs, a movie which also inspired many to pursue skydiving and has a cult-like status amongst skydivers as a result: "The Gypsy Moths". He nearly accepted the role of Mike Brady for the TV series, "The Brady Bunch", but was advised by his agent to decline in exchange for a more promising role, which he did. Hackman was nominated for a second Best Supporting Actor Academy Award for his role in "I Never Sang for My Father" (1970). He won the Academy Award for Best Actor for his performance as New York City Detective Jimmy "Popeye" Doyle in "The French Connection" (1971), marking his graduation to leading man status. He followed this with leading roles in the disaster film "The Poseidon Adventure" (1972) and Francis Ford Coppola's "The Conversation" (1974), which was nominated for several Oscars. That same year, Hackman appeared in what became one of his most famous comedic roles as The Blindman in "Young Frankenstein". He appeared as one of Teddy Roosevelt's former Rough Riders in the Western horse-race saga "Bite the Bullet" (1975). He reprised his Oscar winning role as Doyle in the sequel "French Connection II" (1975), and was part of an all-star cast in the war film "A Bridge Too Far" (1977), playing Polish General Stanisław Sosabowski. Hackman showed a talent for both comedy and the "slow burn" as criminal mastermind Lex Luthor in "Superman: The Movie" (1978), a role he would reprise in its 1980 and 1987 sequels. Hackman alternated between leading and supporting roles during the 1980s, with prominent roles in "Reds" (1981)—directed by and starring Warren Beatty—"Under Fire" (1983), "Hoosiers" (1986) (which an American Film Institute poll in 2008 voted the fourth-greatest film of all time in the sports genre), "No Way Out" (1987) and "Mississippi Burning" (1988), where he was nominated for a second Best Actor Oscar. Between 1985 and 1988, he starred in 9 films, making him the busiest actor alongside Steve Guttenberg. Hackman appeared with Anne Archer in "Narrow Margin" (1990), a remake of the 1952 film "The Narrow Margin". In 1992, he played the sadistic sheriff "Little" Bill Daggett in the Western "Unforgiven" directed by Clint Eastwood and written by David Webb Peoples. Hackman had pledged to avoid violent roles, but Eastwood convinced him to take the part, which earned him a second Oscar, this time for Best Supporting Actor. 
The film also won Best Picture. In 1993 he appeared in "Geronimo: An American Legend" as Brigadier General George Crook, and co-starred with Tom Cruise as a corrupt lawyer in "The Firm", a legal thriller based on the John Grisham novel of the same name. Hackman would appear in a second film based on a John Grisham novel, playing a convict on death row in "The Chamber" (1996). Other notable films Hackman appeared in during the 1990s include "Wyatt Earp" (1994) (as Nicholas Porter Earp, Wyatt Earp's father), "The Quick and the Dead" (1995) opposite Sharon Stone, Leonardo DiCaprio and Russell Crowe, and as submarine Captain Frank Ramsey alongside Denzel Washington in "Crimson Tide" (1995). Hackman played movie director Harry Zimm opposite John Travolta in the comedy-drama "Get Shorty" (1995). He reunited with Clint Eastwood in "Absolute Power" (1997), and co-starred with Will Smith in "Enemy of the State" (1998), playing a character reminiscent of the one he had portrayed in "The Conversation". In 1996, he took a comedic turn as conservative Senator Kevin Keeley in "The Birdcage" with Robin Williams and Nathan Lane. Hackman co-starred with Owen Wilson in "Behind Enemy Lines" (2001), and appeared in the David Mamet crime thriller "Heist" (2001), as an aging professional thief of considerable skill who is forced into one final job. He also gained much critical acclaim playing against type as the head of an eccentric family in Wes Anderson's comedy film "The Royal Tenenbaums" (2001). He received a Golden Globe Award nomination for Best Actor in a Motion Picture Musical or Comedy. In 2003, he starred in another John Grisham legal drama, "Runaway Jury", finally making a film with his long-time friend Dustin Hoffman. In 2004, Hackman appeared alongside Ray Romano in the comedy "Welcome to Mooseport", his final film acting role to date. Hackman was honored with the Cecil B. DeMille Award from the Golden Globe Awards for his "outstanding contribution to the entertainment field" in 2003. On July 7, 2004, Hackman gave a rare interview to Larry King, where he announced that he had no future film projects lined up and believed his acting career was over. In 2008, while promoting his third novel, he confirmed that he had retired from acting. When asked during a "GQ" interview in 2011 if he would ever come out of retirement to do one more film, he said he might consider it "if I could do it in my own house, maybe, without them disturbing anything and just one or two people." In 2016 he narrated the Smithsonian Channel documentary "The Unknown Flag Raiser of Iwo Jima." Together with undersea archaeologist Daniel Lenihan, Hackman has written three historical fiction novels: "Wake of the Perdido Star" (1999), a sea adventure of the 19th century; "Justice for None" (2004), a Depression-era tale of murder; and "Escape from Andersonville" (2008), about a prison escape during the American Civil War. His first solo effort, a story of love and revenge set in the Old West titled "Payback at Morning Peak", was released in 2011. A police thriller, "Pursuit", followed in 2013. In 2011 he appeared on the Fox Sports Radio show "The Loose Cannons", where he discussed his career and novels with Pat O'Brien, Steve Hartman, and Vic "The Brick" Jacobs. Hackman's first marriage was to Faye Maltese. They had three children: Christopher Allen, Elizabeth Jean, and Leslie Anne Hackman. The couple divorced in 1986 after three decades of marriage. In 1991 he married Betsy Arakawa. They have a home in Santa Fe, New Mexico. 
In the late 1970s, Hackman competed in Sports Car Club of America races driving an open-wheeled Formula Ford. In 1983 he drove a Dan Gurney Team Toyota in the 24 Hours of Daytona endurance race. He also won the Long Beach Grand Prix Celebrity Race. Hackman underwent an angioplasty in 1990. He is an avid fan of the Jacksonville Jaguars and regularly attended Jaguars games as a guest of then-head coach Jack Del Rio. He is friends with Del Rio from Del Rio's playing days at the University of Southern California. In January 2012 the then 81-year-old Hackman was riding a bicycle in the Florida Keys when he was struck by a car. Asteroid 55397 Hackman, discovered by Roy Tucker in 2001, was named in his honor. The official naming citation was published by the Minor Planet Center on 18 May 2019.
https://en.wikipedia.org/wiki?curid=12561
Gregor Mendel Gregor Johann Mendel (; ; 20 July 1822 – 6 January 1884) was a scientist, Augustinian friar and abbot of St. Thomas' Abbey in Brno, Margraviate of Moravia. Mendel was born in a German-speaking family in the Silesian part of the Austrian Empire (today's Czech Republic) and gained posthumous recognition as the founder of the modern science of genetics. Though farmers had known for millennia that crossbreeding of animals and plants could favor certain desirable traits, Mendel's pea plant experiments conducted between 1856 and 1863 established many of the rules of heredity, now referred to as the laws of Mendelian inheritance. Mendel worked with seven characteristics of pea plants: plant height, pod shape and color, seed shape and color, and flower position and color. Taking seed color as an example, Mendel showed that when a true-breeding yellow pea and a true-breeding green pea were cross-bred their offspring always produced yellow seeds. However, in the next generation, the green peas reappeared at a ratio of 1 green to 3 yellow. To explain this phenomenon, Mendel coined the terms "recessive" and "dominant" in reference to certain traits. In the preceding example, the green trait, which seems to have vanished in the first filial generation, is recessive and the yellow is dominant. He published his work in 1866, demonstrating the actions of invisible "factors"—now called genes—in predictably determining the traits of an organism. The profound significance of Mendel's work was not recognized until the turn of the 20th century (more than three decades later) with the rediscovery of his laws. Erich von Tschermak, Hugo de Vries, Carl Correns and William Jasper Spillman independently verified several of Mendel's experimental findings, ushering in the modern age of genetics. Mendel was born into a German-speaking family in Hynčice ("Heinzendorf bei Odrau" in German), at the Moravian-Silesian border, Austrian Empire (now a part of the Czech Republic). He was the son of Anton and Rosine (Schwirtlich) Mendel and had one older sister, Veronika, and one younger, Theresia. They lived and worked on a farm which had been owned by the Mendel family for at least 130 years (the house where Mendel was born is now a museum devoted to Mendel). During his childhood, Mendel worked as a gardener and studied beekeeping. As a young man, he attended gymnasium in Opava (called "Troppau" in German). He had to take four months off during his gymnasium studies due to illness. From 1840 to 1843, he studied practical and theoretical philosophy and physics at the Philosophical Institute of the University of Olomouc, taking another year off because of illness. He also struggled financially to pay for his studies, and Theresia gave him her dowry. Later he helped support her three sons, two of whom became doctors. He became a monk in part because it enabled him to obtain an education without having to pay for it himself. As the son of a struggling farmer, the monastic life, in his words, spared him the "perpetual anxiety about a means of livelihood." He was given the name Gregor ("Řehoř" in Czech) when he joined the Augustinian monks. When Mendel entered the Faculty of Philosophy, the Department of Natural History and Agriculture was headed by Johann Karl Nestler who conducted extensive research of hereditary traits of plants and animals, especially sheep. 
Upon recommendation of his physics teacher Friedrich Franz, Mendel entered the Augustinian St Thomas's Abbey in Brno (called "Brünn" in German) and began his training as a priest. Born Johann Mendel, he took the name Gregor upon entering religious life. Mendel worked as a substitute high school teacher. In 1850, he failed the oral part, the last of three parts, of his exams to become a certified high school teacher. In 1851, he was sent to the University of Vienna to study under the sponsorship of Abbot Napp so that he could get a more formal education. At Vienna, his professor of physics was Christian Doppler. Mendel returned to his abbey in 1853 as a teacher, principally of physics. In 1856, he took the exam to become a certified teacher and again failed the oral part. In 1867, he replaced Napp as abbot of the monastery. After he was elevated as abbot in 1868, his scientific work largely ended, as Mendel became overburdened with administrative responsibilities, especially a dispute with the civil government over its attempt to impose special taxes on religious institutions. Mendel died on 6 January 1884, at the age of 61, in Brno, Moravia, Austria-Hungary (now Czech Republic), from chronic nephritis. Czech composer Leoš Janáček played the organ at his funeral. After his death, the succeeding abbot burned all papers in Mendel's collection, to mark an end to the disputes over taxation. Gregor Mendel, who is known as the "father of modern genetics", was inspired by both his professors at the Palacký University, Olomouc (Friedrich Franz and Johann Karl Nestler), and his colleagues at the monastery (such as Franz Diebl) to study variation in plants. In 1854, Napp authorized Mendel to carry out a study in the monastery's experimental garden, which was originally planted by Napp in 1830. Unlike Nestler, who studied hereditary traits in sheep, Mendel used the common edible pea and started his experiments in 1856. After initial experiments with pea plants, Mendel settled on studying seven traits that seemed to be inherited independently of other traits: seed shape, flower color, seed coat tint, pod shape, unripe pod color, flower location, and plant height. He first focused on seed shape, which was either angular or round. Between 1856 and 1863 Mendel cultivated and tested some 28,000 plants, the majority of which were pea plants ("Pisum sativum"). This study showed that, when different true-breeding varieties were crossed with each other (e.g., tall plants fertilized by short plants), in the second generation one in four pea plants had purebred recessive traits, two out of four were hybrids, and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later came to be known as Mendel's Laws of Inheritance. Mendel presented his paper, "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), at two meetings of the Natural History Society of Brno in Moravia on 8 February and 8 March 1865. It generated a few favorable reports in local newspapers, but was ignored by the scientific community. When Mendel's paper was published in 1866 in "Verhandlungen des naturforschenden Vereines in Brünn", it was seen as essentially about hybridization rather than inheritance, had little impact, and was only cited about three times over the next thirty-five years. His paper was criticized at the time, but is now considered a seminal work. 
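As a rough illustration of the segregation behind the 3-to-1 and 1:2:1 ratios described above, the short Python sketch below (hypothetical illustrative code, not anything from Mendel's work; the allele labels and sample size are arbitrary assumptions) simulates an F2 generation produced by crossing heterozygous F1 plants:

    import random

    def simulate_f2(n):
        """Simulate n F2 offspring from an Aa x Aa cross (law of segregation)."""
        dominant = recessive = 0
        for _ in range(n):
            # each parent contributes one allele at random
            offspring = random.choice("Aa") + random.choice("Aa")
            if "A" in offspring:
                dominant += 1    # at least one dominant allele -> dominant phenotype
            else:
                recessive += 1   # aa -> recessive phenotype
        return dominant, recessive

    dom, rec = simulate_f2(8000)
    print(f"dominant : recessive = {dom} : {rec} (expected about 3 : 1)")

Tallying genotypes instead of phenotypes in the same sketch would recover the 1:2:1 split of purebred dominant, hybrid, and purebred recessive plants described above.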
Notably, Charles Darwin was not aware of Mendel's paper, and it has been suggested that if he had been aware of it, genetics as it exists now might have taken hold much earlier. Mendel's scientific biography thus provides an example of the failure of obscure, highly original innovators to receive the attention they deserve. Mendel began his studies on heredity using mice. He was at St. Thomas's Abbey but his bishop did not like one of his friars studying animal sex, so Mendel switched to plants. Mendel also bred bees in a bee house that was built for him, using bee hives that he designed. He also studied astronomy and meteorology, founding the 'Austrian Meteorological Society' in 1865. The majority of his published works were related to meteorology. Mendel also experimented with hawkweed ("Hieracium") and honeybees. He published a report on his work with hawkweed, a group of plants of great interest to scientists at the time because of their diversity. However, the results of Mendel's inheritance study in hawkweeds were unlike his results for peas; the first generation was very variable and many of their offspring were identical to the maternal parent. In his correspondence with Carl Nägeli he discussed his results but was unable to explain them. It was not appreciated until the end of the nineteenth century that many hawkweed species were apomictic, producing most of their seeds through an asexual process. None of his results on bees survived, except for a passing mention in the reports of the Moravian Apiculture Society. All that is known definitely is that he used Cyprian and Carniolan bees, which were particularly aggressive; to the annoyance of other monks and visitors to the monastery, he was eventually asked to get rid of them. Mendel, on the other hand, was fond of his bees, and referred to them as "my dearest little animals". He also described novel plant species, and these are denoted with the botanical author abbreviation "Mendel". About forty scientists listened to Mendel's two groundbreaking lectures, but it would appear that they failed to understand his work. Later, he also carried on a correspondence with Carl Nägeli, one of the leading biologists of the time, but Nägeli too failed to appreciate Mendel's discoveries. At times, Mendel must have entertained doubts about his work, but not always: "My time will come," he reportedly told a friend. During Mendel's lifetime, most biologists held the idea that all characteristics were passed to the next generation through blending inheritance, in which the traits from each parent are averaged. Instances of this phenomenon are now explained by the action of multiple genes with quantitative effects. Charles Darwin tried unsuccessfully to explain inheritance through a theory of pangenesis. It was not until the early 20th century that the importance of Mendel's ideas was realized. By 1900, research aimed at finding a successful theory of discontinuous inheritance rather than blending inheritance led to independent duplication of his work by Hugo de Vries and Carl Correns, and the rediscovery of Mendel's writings and laws. Both acknowledged Mendel's priority, and it is thought probable that de Vries did not understand the results he had found until after reading Mendel. Though Erich von Tschermak was originally also credited with rediscovery, this is no longer accepted because he did not understand Mendel's laws. Though de Vries later lost interest in Mendelism, other biologists started to establish modern genetics as a science. 
All three of these researchers, each from a different country, published their rediscovery of Mendel's work within a two-month span in the spring of 1900. Mendel's results were quickly replicated, and genetic linkage was quickly worked out. Biologists flocked to the theory; even though it was not yet applicable to many phenomena, it sought to give a genotypic understanding of heredity which they felt was lacking in previous studies of heredity, which had focused on phenotypic approaches. Most prominent of these previous approaches was the biometric school of Karl Pearson and W. F. R. Weldon, which was based heavily on statistical studies of phenotype variation. The strongest opposition to this school came from William Bateson, who perhaps did the most in the early days of publicising the benefits of Mendel's theory (the word "genetics", and much of the discipline's other terminology, originated with Bateson). This debate between the biometricians and the Mendelians was extremely vigorous in the first two decades of the 20th century, with the biometricians claiming statistical and mathematical rigor, whereas the Mendelians claimed a better understanding of biology. Modern genetics shows that Mendelian heredity is in fact an inherently biological process, though not all genes of Mendel's experiments are yet understood. In the end, the two approaches were combined, especially by work conducted by R. A. Fisher as early as 1918. The combination, in the 1930s and 1940s, of Mendelian genetics with Darwin's theory of natural selection resulted in the modern synthesis of evolutionary biology. In 1936, R.A. Fisher, a prominent statistician and population geneticist, reconstructed Mendel's experiments, analyzed results from the F2 (second filial) generation and found the ratio of dominant to recessive phenotypes (e.g. green versus yellow peas; round versus wrinkled peas) to be implausibly and consistently too close to the expected ratio of 3 to 1. Fisher asserted that "the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel's expectations"; Mendel's alleged observations, according to Fisher, were "abominable", "shocking", and "cooked". Other scholars agree with Fisher that Mendel's various observations come uncomfortably close to Mendel's expectations. Anthony W.F. Edwards, for instance, remarks: "One can applaud the lucky gambler; but when he is lucky again tomorrow, and the next day, and the following day, one is entitled to become a little suspicious". Three other lines of evidence likewise lend support to the assertion that Mendel's results are indeed too good to be true. Fisher's analysis gave rise to the Mendelian paradox, a paradox that remains unsolved to this day. Thus, on the one hand, Mendel's reported data are, statistically speaking, too good to be true; on the other, "everything we know about Mendel suggests that he was unlikely to engage in either deliberate fraud or in unconscious adjustment of his observations." A number of writers have attempted to resolve this paradox. One attempted explanation invokes confirmation bias. Fisher described Mendel's experiments as "biased strongly in the direction of agreement with expectation ... to give the theory the benefit of doubt". This might arise if he detected an approximate 3 to 1 ratio early in his experiments with a small sample size, and, in cases where the ratio appeared to deviate slightly from this, continued collecting more data until the results conformed more nearly to an exact ratio. 
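As a minimal sketch of the goodness-of-fit reasoning involved in Fisher's critique (illustrative only; Fisher's 1936 analysis aggregated many experiments and is considerably more involved), the following Python snippet computes the Pearson chi-squared statistic for Mendel's commonly quoted F2 seed-color counts against the 3-to-1 expectation:

    # Mendel's commonly quoted F2 seed-color totals: 6022 yellow, 2001 green
    observed = [6022, 2001]
    total = sum(observed)
    expected = [total * 3 / 4, total * 1 / 4]   # Mendelian 3 : 1 expectation

    # Pearson chi-squared statistic (1 degree of freedom)
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print(f"chi-squared = {chi_sq:.3f}")        # about 0.015, an extremely close fit

Values this small, recurring across experiment after experiment, are the kind of persistently close agreement that struck Fisher as suspicious.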
In his 2004 article, J.W. Porteous concluded that Mendel's observations were indeed implausible. However, reproduction of the experiments has demonstrated that there is no real bias towards Mendel's data. Another attempt to resolve the Mendelian paradox notes that a conflict may sometimes arise between the moral imperative of a bias-free recounting of one's factual observations and the even more important imperative of advancing scientific knowledge. Mendel might have felt compelled "to simplify his data in order to meet real, or feared, editorial objections." Such an action could be justified on moral grounds (and hence provide a resolution to the Mendelian paradox), since the alternative—refusing to comply—might have retarded the growth of scientific knowledge. Similarly, like so many other obscure innovators of science, Mendel, a little-known innovator of working-class background, had to "break through the cognitive paradigms and social prejudices of his audience". If such a breakthrough "could be best achieved by deliberately omitting some observations from his report and adjusting others to make them more palatable to his audience, such actions could be justified on moral grounds." Daniel L. Hartl and Daniel J. Fairbanks reject outright Fisher's statistical argument, suggesting that Fisher incorrectly interpreted Mendel's experiments. They find it likely that Mendel scored more than 10 progeny, and that the results matched the expectation. They conclude: "Fisher's allegation of deliberate falsification can finally be put to rest, because on closer analysis it has proved to be unsupported by convincing evidence." In 2008 Hartl and Fairbanks (with Allan Franklin and A.W.F. Edwards) wrote a comprehensive book in which they concluded that there were no reasons to assert Mendel fabricated his results, nor that Fisher deliberately tried to diminish Mendel's legacy. Reassessment of Fisher's statistical analysis, according to these authors, also disproves the notion of confirmation bias in Mendel's results.
https://en.wikipedia.org/wiki?curid=12562
Grappling Grappling, in hand-to-hand combat, is a sport that consists of gripping or seizing the opponent. Similarly to wrestling, grappling is used at close range to gain a physical advantage over an opponent, such as imposing a position, or to cause injury to the opponent. Grappling covers techniques used in many disciplines, styles and martial arts that are practiced both as combat sports and for self-defense. Grappling contests often involve takedowns and ground control, and may end when a contestant concedes defeat, also known as a submission or tap out. Grappling most commonly does not include striking or the use of weapons. However, some fighting styles or martial arts known especially for their grappling techniques teach tactics that include strikes and weapons either alongside grappling or combined with it. Grappling techniques can be broadly subdivided into clinch fighting; takedowns and throws; submission holds and pinning or controlling techniques; and sweeps, reversals, turnovers, and escapes. The degree to which grappling is utilized in different fighting systems varies. Some systems, such as amateur wrestling, Pehlwani, submission wrestling, judo, sumo, and Brazilian jiu-jitsu, are exclusively grappling arts and do not allow striking. Many combat sports, such as shooto and mixed martial arts competitions, use grappling while retaining striking as part of the sport. Grappling is not allowed in some martial arts and combat sports, usually for the sake of focusing on other aspects of combat such as punching, kicking or mêlée weapons. Opponents in these types of matches, however, still grapple with each other occasionally when fatigued or in pain; when either occurs, the referee will step in and restart the match, sometimes giving a warning to one or both of the fighters. Examples of these include boxing, kickboxing, taekwondo, karate, and fencing. While prolonged grappling in Muay Thai will result in a separation of the competitors, the art extensively uses the clinch hold known as a double collar tie. Grappling techniques and defenses to grappling techniques are also considered important in self-defense applications and in law enforcement. The most common grappling techniques taught for self-defense are escapes from holds and application of pain compliance techniques. Grappling can be trained for self-defense, sport, and mixed martial arts (MMA) competition. Stand-up grappling is arguably an integral part of all grappling and clinch fighting arts, considering that two combatants generally start fighting from a stand-up position. The aim of stand-up grappling varies according to the martial arts or combat sports in question. Defensive stand-up grappling concerns itself with pain-compliance holds and escapes from possible grappling holds applied by an opponent, while offensive grappling techniques include submission holds, trapping, takedowns and throws, all of which can be used to inflict serious damage, or to move the fight to the ground. Stand-up grappling can also be used both offensively and defensively simultaneously with striking, either to trap an opponent's arms while striking, to prevent the opponent from obtaining sufficient distance to strike effectively, or to bring the opponent close to apply, for instance, knee strikes. In combat sports, stand-up grappling usually revolves around successful takedowns and throws. Grappling is a major part of combat glima and Løse-tak sport glima, and the fight continues on the ground if both combatants end up there. 
In other martial sports such as MMA, the fight may continue on the ground. Ground grappling refers to all the grappling techniques that are applied while the grapplers are no longer in a standing position. A large part of most martial arts and combat sports which feature ground grappling is positioning and obtaining a dominant position. A dominant position (usually on top) allows the dominant grappler a variety of options, including: attempting to escape by standing up, obtaining a pin or hold-down to control and exhaust the opponent, executing a submission hold, or striking the opponent. The bottom grappler is, on the other hand, concerned with escaping the situation and improving their position, typically by using a sweep or reversal. In some disciplines, especially those where the guard is used, the bottom grappler may also be able to finish the fight from the bottom by a submission hold. Some people feel more confident on the bottom because of the large number of submissions that can be accomplished from having the opponent in full guard. When unskilled fighters get embroiled in combat, a common reaction is to grab the opponent in an attempt to slow the situation down by holding them still, resulting in an unsystematic struggle that relies on brute force. A skilled fighter, in contrast, can perform takedowns as a way of progressing to a superior position such as a mount or side control, or can use clinch holds and ground positions to set up strikes, choke holds, and joint locks. A grappler who has been taken down to the ground can use defensive positions such as the guard, which protects against being mounted or attacked. If a grappler is strong and can utilize leverage well, a takedown or throw itself can be a fight-ending maneuver; the impact can render an opponent unconscious. On the other hand, grappling also offers the possibility of controlling an opponent without injuring them. For this reason, most police officers receive some training in grappling. Likewise, grappling sports have been devised so that their participants can compete using full physical effort without injuring their opponents. Grappling is called "dumog" in Eskrima. The term "chin na" in Chinese martial arts deals with the use of grappling to achieve submission or incapacitation of the opponent (these may involve the use of acupressure points). Some Chinese martial arts, aikido, some eskrima systems, the Viking martial art of glima, as well as medieval and Renaissance European martial arts, practice grappling while one or both participants is armed. Such practice is significantly more dangerous than unarmed grappling and generally requires a great deal of training. There are many different regional styles of grappling around the world that are practiced within a limited geographic area or country. Several grappling styles, like sport judo, shoot wrestling, catch wrestling, Grappling, Brazilian jiu-jitsu, sport Sambo, and several types of wrestling including freestyle and Greco-Roman, have gained global popularity. Judo, freestyle wrestling, and Greco-Roman wrestling are Olympic sports, while Grappling, Brazilian jiu-jitsu and Sambo have their own world championship competitions. Other known grappling-oriented systems are shuai jiao, malla-yuddha and aikido. In these arts, the object is either to take down and pin the opponent, or to catch the adversary in a specialized chokehold or joint lock which forces them to submit and admit defeat or be rendered helpless (unconscious or broken limbs). 
There are two forms of dress for grappling that dictate pace and style of action: with a jacket, such as a "gi" or kurtka, and without (no-gi). The jacket, or "gi", form most often utilizes grips on the cloth to control the opponent's body, while the no-gi form emphasizes body control of the torso and head using only the natural holds provided by the body. The use of a jacket is compulsory in judo competition, sambo competition, and most Brazilian jiu-jitsu competition, as well as a variety of folk wrestling styles around the world. Jackets are not used in many forms of wrestling, such as Olympic freestyle, Greco-Roman wrestling and Grappling. Grappling techniques are also used in mixed martial arts along with striking techniques. Strikes can be used to set up grappling techniques and vice versa. The ADCC Submission Wrestling World Championship is the most prestigious full-range (takedown, position, and submission inclusive) grappling tournament in the world and is held only once every two years. The World Jiu-Jitsu Championship, also commonly called the Mundials (Portuguese for "Worlds"), is the most prestigious jacketed full-range (takedown, position, and submission inclusive) grappling tournament in the world. The event also hosts a non-jacketed division (no-gi), but that sub-event is not as prestigious as ADCC in terms of pure non-jacketed competition. United World Wrestling (UWW) is the international governing body for the sport of wrestling. It presides over international competitions for various forms of wrestling, including Grappling for men and women. UWW's flagship Grappling event is the Grappling World Championships. The North American Grappling Association (NAGA) is an organization started in 1995 that holds submission grappling and Brazilian jiu-jitsu tournaments throughout North America and Europe. NAGA is the largest submission grappling association in the world, with over 175,000 participants worldwide, including some of the top submission grapplers and MMA fighters. NAGA grappling tournaments consist of gi and no-gi divisions. No-gi competitors compete under rules drafted by NAGA, while gi competitors compete under standardized Brazilian jiu-jitsu rules. Notable champions include Frank Mir, Joe Fiorentino, Jon Jones, Khabib Nurmagomedov, and Anthony Porcelli. GRiND, started in May 2017, is the first Indian professional grappling tournament series, holding grappling championships (position and submission included). It is the first no-gi event series in India.
https://en.wikipedia.org/wiki?curid=12564
George Mason University George Mason University (Mason, GMU, or George Mason) is a public research university located in Fairfax County near Fairfax City in Virginia. In 1956, the Commonwealth of Virginia authorized the establishment of a Northern Virginia branch of the University of Virginia and the institution that is now named George Mason University opened in September 1957. It became an independent institution in 1972. It has since grown to become the largest four-year public university in the Commonwealth of Virginia. The university is named after the founding father George Mason, a Virginia planter and politician who authored the Virginia Declaration of Rights that later influenced the future Bill of Rights of the United States Constitution. Mason operates four campuses in Virginia (Fairfax, Arlington, Front Royal, and Prince William), as well as a fifth campus in South Korea. The university is classified among "R1: Doctoral Universities – Very high research activity". It is particularly well known in the fields of economics and law and economics. Two Mason economics professors have won the Nobel Memorial Prize in Economics: James M. Buchanan in 1986 and Vernon L. Smith in 2002. EagleBank Arena (formerly the Patriot Center), a 10,000-seat arena and concert venue operated by the university, is located on the main Fairfax campus. The university recognizes 500 student groups as well as 41 fraternities and sororities. The University of Virginia in Charlottesville created an extension center to serve Northern Virginia. "… the University Center opened, on October 1, 1949..." The extension center offered both for credit and non-credit informal classes in the evenings in the Vocational Building of the Washington-Lee High School in Arlington, Virginia, at schools in Alexandria, Fairfax, and Prince William, at federal buildings, at churches, at the Virginia Theological Seminary, and at Marine Corps Base Quantico, and even in a few private homes. The first for credit classes offered were: "Government in the Far East, Introduction to International Politics, English Composition, Principles of Economics, Mathematical Analysis, Introduction to Mathematical Statistics, and Principles of Lip Reading." By the end of 1952, enrollment increased to 1,192 students from 665 students the previous year. A resolution of the Virginia General Assembly in January 1956 changed the extension center into University College, the Northern Virginia branch of the University of Virginia. John Norville Gibson Finley served as director. Seventeen freshmen students attended classes at University College in a small renovated elementary school building in Bailey's Crossroads starting in September 1957. In 1958 University College became George Mason College. The City of Fairfax purchased and donated of land just south of the city limits to the University of Virginia for the college's new site, which is now referred to as the Fairfax Campus. In 1959, the Board of Visitors of the University of Virginia selected a permanent name for the college: George Mason College of the University of Virginia. The Fairfax campus construction planning that began in early 1960 showed visible results when the development of the first of Fairfax Campus began in 1962. In the Fall of 1964 the new campus welcomed 356 students. During the 1966 Session of the Virginia General Assembly, Alexandria delegate James M. 
Thomson, with the backing of the University of Virginia, introduced a bill in the General Assembly to make George Mason College a four-year institution under the University of Virginia's direction. The measure, known as H 33, passed the Assembly easily and was approved on March 1, 1966 making George Mason College a degree-granting institution. During that same year, the local jurisdictions of Fairfax County, Arlington County, and the cities of Alexandria and Falls Church agreed to appropriate $3 million to purchase land adjacent to Mason to provide for a Fairfax Campus with the intention that the institution would expand into a regional university of major proportions, including the granting of graduate degrees. On Friday, April 7, 1972, a contingent from George Mason College, led by Chancellor Lorin A. Thompson, met with Virginia Governor A. Linwood Holton at Richmond. They were there to participate in the governor's signing into law Virginia General Assembly Bill H 210 separating George Mason College from the University of Virginia at Charlottesville and renaming it George Mason University. In 1978, George W. Johnson was appointed to serve as the fourth president. Under his eighteen-year tenure, the university expanded both its physical size and program offerings at a tremendous rate. Shortly before Johnson's inauguration in April 1979, Mason acquired the School of Law and the new Arlington Campus. The university also became a doctoral institution. Toward the end of Johnson's term, Mason would be deep in planning for a third campus in Prince William County at Manassas. Major campus facilities, such as Student Union Building II, EagleBank Arena, Center for the Arts, and the Johnson Learning Center, were all constructed over the course of Johnson's eighteen years as University President. Enrollment once again more than doubled from 10,767 during the fall of 1978 to 24,368 in the spring of 1996. Dr. Alan G. Merten was appointed president in 1996. He believed that the university's location made it responsible for both contributing to and drawing from its surrounding communities—local, national, and global. George Mason was becoming recognized and acclaimed in all of these spheres. During Merten's tenure, the university hosted the World Congress of Information Technology in 1998, celebrated a second Nobel Memorial Prize-winning faculty member in 2002, and cheered the Men's Basketball team in their NCAA Final Four appearance in 2006. Enrollment increased from just over 24,000 students in 1996 to approximately 33,000 during the spring semester of 2012, making Mason Virginia's largest public university and gained prominence at the national level. Dr. Ángel Cabrera officially took office on July 1, 2012. Both Cabrera and the board were well aware that Mason was part of a rapidly changing academia, full of challenges to the viability of higher education. In a resolution on August 17, 2012, the board asked Dr. Cabrera to create a new strategic vision that would help Mason remain relevant and competitive in the future. The drafting of the Vision for Mason, from conception to official outline, created a new mission statement that defines the university. On March 25, 2013, university president Ángel Cabrera held a press conference to formally announce the university's decision to leave the Colonial Athletic Association to join the Atlantic 10 Conference (A-10). The announcement came just days after the Board of Visitors' approval of the university's Vision document that Dr. Cabrera had overseen. 
Mason began competition in the A-10 during the 2013–2014 academic year, and Mason's association with the institutions that comprise the A-10 started a new chapter in Mason athletics, academics, and other aspects of university life. "The Chronicle of Higher Education" listed Mason as one of the "Great Colleges to Work For" from 2010–2014. "The Washington Post" listed Mason as one of the "Top Workplaces" in 2014. The WorldatWork Alliance for Work-Life Progress awarded Mason the Seal of Distinction in 2015. The AARP listed Mason as one of the Best Employers for Workers Over 50 in 2013. Phi Beta Kappa established a chapter at the university in 2013. In 2018, a Freedom of Information Act lawsuit revealed that conservative donors, including the Charles Koch Foundation and Federalist Society, were given direct influence over faculty hiring decisions at the university's law and economics schools. GMU President Ángel Cabrera acknowledged that the revelations raised questions about the university's academic integrity and pledged to prohibit donors from sitting on faculty selection committees in the future. Dr. Ángel Cabrera resigned his position and became president of Georgia Tech. The interim president was Anne B. Holton. On February 24, 2020, the Board of Visitors appointed Dr. Gregory Washington as the eighth president. He started at George Mason on July 1, 2020. Dr. Washington is the university's first African-American president. George Mason University has four campuses in the United States, all within the Commonwealth of Virginia. Three are within the Northern Virginia section of the Piedmont, and one in the Blue Ridge Mountains region. The university has one campus in South Korea, within the Incheon Free Economic Zone of the Songdo region. The university had a campus at Ras al-Khaimah, but that location is now closed. The Blue Ridge campus, just outside Front Royal, is run in cooperation with the Smithsonian Institution. The university's Fairfax Campus is situated on of landscaped land with a large pond in a suburban environment in George Mason, Virginia, just south of the City of Fairfax in central Fairfax County. Off-campus amenities are within walking distance and Washington, D.C. is approximately from campus. Notable buildings include the student union building, the Johnson Center; the Center for the Arts, a 2,000-seat concert hall; the Long and Kimmy Nguyen Engineering Building; Exploratory Hall for science, new in 2013; an astronomy observatory and telescope; the Art and Design Building; the newly expanded Fenwick Library, and will soon reconstruct the academic buildings Robinson A and B; the Krasnow Institute; and three fully appointed gyms and an aquatic center for student use. The stadiums for indoor and outdoor track and field, baseball, softball, tennis, soccer and lacrosse are also on the Fairfax campus, as is Masonvale, a housing community for faculty, staff and graduate students. The smallest building on the campus is the information booth. This campus is served by the Washington Metro Orange Line at the Vienna, Fairfax, GMU station as well as Metrobus routes. The CUE Bus Green One, Green Two, Gold One, and Gold Two lines all provide service to this campus at . This campus is served by the Virginia Railway Express Manassas Line at the Burke Center station. Fairfax Connector Route 306: GMU–Pentagon provides service to this campus. 
Mason provides shuttle service between this campus and Vienna, Fairfax, GMU Metro station, the Burke Center VRE station, the Science and Technology Campus, West Campus, and downtown City of Fairfax. The bronze statue of George Mason on campus was created by Wendy M. Ross and dedicated on April 12, 1996. The 7-foot statue shows George Mason presenting his first draft of the Virginia Declaration of Rights, which was later the basis for the U.S. Constitution's Bill of Rights. Beside Mason is a model of a writing table that is still in the study of Gunston Hall, Mason's Virginia estate. The books on the table—volumes of Hume, Locke and Rousseau—represent influences in his thought. The Arlington Campus is situated in Virginia Square, a bustling urban environment on the edge of Arlington, Virginia's Clarendon business district near downtown Washington, D.C. The campus was founded in 1979 with the acquisition of a law school; in 1998 Hazel Hall opened to house the Mason School of Law; subsequent development created Van Metre Hall (formerly Founders Hall), home of the Schar School of Policy and Government, the Center for Regional Analysis, and the graduate-level administrative offices for the School of Business. Vernon Smith Hall houses the School for Conflict Analysis and Resolution, the Mercatus Center, and the Institute for Humane Studies. The campus also houses the 300-seat Founders Hall Auditorium. The Arlington Campus is also the future home of the Mason School of Computing, a plan intended to double the number of computer science students to 15,000 over the next five years. As part of Amazon's HQ2 development in nearby Crystal City, Mason announced a slew of new changes to be made to the Arlington Campus. It committed to expanding the campus and replacing Original Building with a 400,000 sq ft mixed-use high rise, which would be developed in a public-private partnership to allow for mixed-use commercial space on lower levels. The university also announced the development of an Institute for Digital InnovAtion (IDIA) to include labs, coworking and public programming spaces, ground-floor retail, a parking garage and a public plaza. The university has said it will invest $250 million at the Arlington Campus in the next five years, adding 1,000 faculty members and enlarging the campus to 1.2 million square feet, with an emphasis on computing programs and advanced research in high-tech fields. This campus is served by the Washington Metro Orange Line at the Virginia Square-GMU station, a campus shuttle service, and Metrobus route 38B. The rail station is located one block west of the campus. Arlington Rapid Transit or ART Bus routes 41, 42, and 75 also provide service at this location. The campus offers one electric vehicle charging station, five disabled-permit automotive parking locations, three bicycle parking locations, and one Capital Bikeshare location. The Science and Technology campus opened on August 25, 1997, as the Prince William campus in Manassas, Virginia, on of land, some of it still undeveloped. More than 4,000 students are enrolled in classes in bioinformatics, biotechnology, information technology, and forensic biosciences educational and research programs. There are undergraduate programs in health, fitness and recreation. There are graduate programs in exercise, fitness, health, geographic information systems, and facility management. Much of the research takes place in the high-security Biomedical Research Laboratory. 
The 1,123-seat Merchant Hall and the 300-seat Verizon Auditorium in the Hylton Performing Arts Center opened in 2010. The 110,000-square-foot Freedom Aquatic and Fitness Center is operated by the Mason Enterprise Center. The Mason Center for Team and Organizational Learning stylized as EDGE is an experiential education facility open to the public. The Sports Medicine Assessment Research and Testing lab stylized as SMART Lab is located within the Freedom center. The SMART Lab is most known for its concussion research. On April 23, 2015 the campus was renamed to the Science and Technology Campus. In 2019, the university engaged in a feasibility study of creating a medical school at the Prince William Campus. The proposed medical school would be completed in 2022. The campus in Front Royal, Virginia is a collaboration between the Smithsonian Institution and the university. Open to students in August 2012 after breaking ground on the project on June 29, 2011, the primary focus of the campus is global conservation training. The Volgenau Academic Center includes three teaching laboratories, four classrooms, and 18 offices. Shenandoah National Park is visible from the dining facility's indoor and outdoor seating. Living quarters include 60 double occupancy rooms, an exercise facility, and study space. Opened in March 2014, the Songdo campus is in South Korea's Incheon Free Economic Zone, a site designed for 850,000 people. It's from Seoul and a two-hour flight from China and Japan, and is connected to the Seoul Metropolitan Subway. The Commonwealth of Virginia considers the Songdo campus legally no different than any other Mason campus, "... board of visitors shall have the same powers with respect to operation and governance of its branch campus in Korea as are vested in the board by the Code of Virginia with respect to George Mason University in Virginia ..." Mason Korea students will spend the sixth and seventh semesters (one year) on the Fairfax Campus, with all other course work to be completed in Songdo. George Mason University Korea offers seven undergraduate programs: Management, Finance, Accounting, Economics, Global Affairs, Conflict Analysis and Resolution, and Computer Game Design. Mason Korea also has two graduate programs: Systems Engineering and IB & ESOL. Mason Korea's first commencement class graduated in December 2017. Students from Mason Korea earn the same diploma as home campus students, with English as the language of instruction. Mason offers undergraduate, master's, law, and doctoral degrees. The student-faculty ratio is 17:1; 58 percent of undergraduate classes have fewer than 30 students and 30 percent of undergraduate classes have fewer than 20 students. Between 2009 and 2013, George Mason saw a 21% increase in the number of applications, has enrolled 4% more new degree-seeking students, and has seen the percentage of undergraduate and graduate applications accepted each decrease by 4%. Law applications accepted increased by 10%. Mason enrolled 33,917 students for Fall 2013, up 956 (+3%) from Fall 2012. Undergraduate students made up 65% (21,990) of the fall enrollment, graduate students 34% (11,399), and law students 2% (528). Undergraduate headcount was 1,337 higher than Fall 2012 (+7%); graduate headcount was 262 lower (−2%); and law student headcount was 119 lower (−18%). Matriculated students come from all 50 states and 122 foreign countries. 
As of fall 2014, the university had 33,791 students enrolled, including 21,672 undergraduates, 7,022 seeking master's degrees, 2,264 seeking doctoral degrees and 493 seeking law degrees. The university enrolls 34,904 students, making it the largest university by head count in the Commonwealth of Virginia. George Mason University is accredited by the Commission on Colleges of the Southern Association of Colleges and Schools (SACSCOC) to award bachelor's, master's, and doctoral degrees. George Mason University, an institution dedicated to research of consequence, hosts $149 million in sponsored research projects annually, as of 2019. In 2016, Mason was classified by the Carnegie Classification of Institutions of Higher Education among the U.S. universities that receive the most research funding and award research/scholarship doctorates. Mason moved into this classification based on a review of its 2013–2014 data that was performed by the Center for Postsecondary Research at Indiana University. The research is focused on health, sustainability and security. In health, researchers focus on wellness, disease prevention, advanced diagnostics and biomedical analytics. Sustainability research examines climate change, natural disaster forecasting, and risk assessment. Mason's security experts study domestic and international security as well as cyber security. The university is home to numerous research centers and institutes. Mason has established far-reaching research partnerships with many government agencies, non-profits, health systems, and international finance organizations. Among others, Mason researches computer systems and networks with the Defense Advanced Research Projects Agency (DARPA); investigates climate issues with the National Aeronautics and Space Administration (NASA); explores underwater archaeology with the National Oceanic and Atmospheric Administration (NOAA); partners on conservation and biological matters with the Smithsonian Institution; studies brain neurons with The Allen Institute; conducts economic research with the International Monetary Fund; and examines chronic illnesses and disabilities with the Inova Health System. Students decorate the George Mason statue on the Fairfax campus for events, some rub the statue's toe for good luck, and many pose with the statue for graduation photographs. Between 1988 and 1990 Anthony Maiello wrote the original "George Mason Fight Song", which was edited by Michael Nickens in 2009. Each spring, student organizations at Mason compete to paint one of the 38 benches located on the Quad in front of Fenwick Library. For years, student organizations have painted those benches that line the walkway to gain recognition for their group. With more than 300 student organizations, there is much competition to paint one of the benches. Painting takes place in the spring. Every year since 1965, George Mason University has hosted a celebration called Mason Day. Mason Day brings food trucks, carnival rides, local artists, and notable performers to campus for the students to de-stress before finals. The event is typically free for students and $20 for the general public. On the Fairfax campus, the northernmost housing area, known as the Townhouses, is technically on campus but lies about a mile from the center of campus and about a half mile from the edge of the main part of the Fairfax campus. On the eastern edge of the Fairfax campus lies Masonvale, housing intended for graduate students and visiting faculty. 
On the southern edge of the Fairfax campus are President's Park, Liberty Square, and Potomac Heights. On the western side of the Fairfax campus, near Ox Road/Rt 123, are the Mason Global Center, Whitetop, and Rogers. The Student Apartments off Aquia Creek Lane were torn down in 2019. Closer to the center of the Fairfax campus are the residence halls along Chesapeake Lane: Northern Neck, Commonwealth, Blue Ridge, Sandbridge, Piedmont, and Tidewater, as well as Hampton Roads, Dominion, Eastern Shore, and the Commons. At the Science and Technology (SciTech) campus near Manassas, Virginia, west of Fairfax, Beacon Hall was designed for graduate student housing. The G.T. Halpin Family Living & Learning Community is on the Smithsonian-Mason School of Conservation campus west of Fairfax, and Student's Hall and Guest House are on the Songdo campus. George Mason University has seven on-campus dining halls in addition to its other restaurants and dining options. George Mason University's Fairfax Campus is the first U.S. campus to include robot food delivery in its meal plans. Twenty-five autonomous robots were provided by the Estonian robotics company Starship Technologies to carry out meal deliveries. The cost of a delivery, as of November 2019, is $1.99. Student organizations can have an academic, social, athletic, religious/irreligious, career, or just about any other focus. The university recognizes 500 such groups. Mason sponsors several student-run media outlets through the Office of Student Media. Mason has 41 fraternities and sororities, with a total Greek population of about 1,800. Mason does not have a traditional "Greek Row" of housing specifically for fraternities, although recruitment, charitable events—including a spring Greek Week—and other chapter activities take place on the Fairfax Campus. George Mason University is a public government-funded university that has to comply with the First Amendment of the United States Constitution. The university, as part of the government of the Commonwealth of Virginia, cannot endorse or establish a religion, nor can it impede the "free exercise of religion" of its students. Therefore, independent religious student-led organizations can register with the university in order to minister to students at their own choosing, and a number of such organizations are registered with the university. The George Mason Patriots are the athletic teams of George Mason University, located in Fairfax, Virginia. The Patriots compete in Division I of the National Collegiate Athletic Association as members of the Atlantic 10 Conference for most sports. About 485 student-athletes compete in 22 men's and women's Division I sports – baseball, basketball, cross-country, golf, lacrosse, rowing, soccer, softball, swimming and diving, tennis, indoor and outdoor track and field, volleyball, and wrestling. Intercollegiate men's and women's teams are members of the National Collegiate Athletic Association (NCAA) Division I, the Atlantic 10, the Eastern College Athletic Conference (ECAC), the Eastern Intercollegiate Volleyball Association (EIVA), the Eastern Wrestling League (EWL), and the Intercollegiate Association of Amateur Athletes of America (IC4A). In addition to its NCAA Division I teams, George Mason University has several club sports. 
The club sports offer students a chance to compete at a high level without the time commitment of a D-I/varsity team in sports including badminton, baseball, basketball (women's), bowling, cricket, crew, cycling, equestrian, fencing, field hockey, football, ice hockey, lacrosse (men's and women's), paintball, powerlifting, quidditch, rugby (men's and women's), running, soccer (men's and women's), swimming, tae kwon do, trap & skeet, triathlon, ultimate frisbee (men's and women's), volleyball (men's and women's), wrestling, and underwater hockey. Clubs have a competitive range from regional competition to yearly participation in U.S. national college club-level championships. The Mason Players is a faculty-led student organization that produces six productions each season. Each season includes two "Main Stage" productions, which are directed by faculty members or guest artists, as well as "Studio" productions, which are directed by students through an application process within Mason Players. There is also an annual production of "Originals", which consists of 10-minute original plays written by students. Full-time students of George Mason University, both within and outside the School of Theater, are allowed to audition for these productions. George Mason University has been subject to controversy surrounding donations from the Charles Koch Foundation. University documents revealed that the Koch brothers were given the ability to pick candidates as a condition of monetary donations. George Mason University altered its donor rules following the controversy. George Mason University has been subject to many accusations of mishandling sexual assault and misconduct allegations. In 2016 a male student won an appeal overturning his suspension for sexual assault. The courts found that Brent Ericson, who had prior knowledge of this and previous cases against the student, did not give the student the ability to defend himself, as he suspended the student for prior, unrelated incidents. Brent Ericson has also been accused of sharing home addresses in a sexual misconduct case. The Title IX process at George Mason University has continued to be subject to controversy. Following the hiring of Brett Kavanaugh, students circulated a petition demanding not only the removal of Kavanaugh but also an increase in the number of Title IX Coordinators on campus. The petition received 10,000 signatures and resulted in approval for funding for two more Title IX Coordinator positions. However, as of 2020, George Mason University has only one Title IX Coordinator. At least one student has publicly alleged that George Mason University mishandles Title IX investigations. Two professors have been accused of sexual misconduct at George Mason. In 2018, Peter Pober was alleged to have committed sexual misconduct during his tenure as a competitive speech coach. He retired while being investigated for misconduct. In 2020, Todd Kashdan sued George Mason University for gender bias after he was sanctioned for sexual harassment by Title IX procedures. The lawsuit was not upheld, as Kashdan failed to show sufficient grounds for complaint, with the judge noting that Kashdan admitted to many of the accusations. George Mason University economist Robin Hanson stirred controversy in 2018, when he argued for "redistribution" policies for sex three days after the Toronto van attack. Further controversy was raised when archives of his previous writing, in which he argued that infidelity is comparable to "gentle silent rape", resurfaced. 
In 2016, George Mason's law school was briefly named the Antonin Scalia School of Law. Following the realization that this would lead to an offensive acronym ("ASSLaw"), the school was quickly renamed the Antonin Scalia Law School.
https://en.wikipedia.org/wiki?curid=12566
Grammar In linguistics, grammar (from Ancient Greek ) is the set of structural rules governing the composition of clauses, phrases and words in a natural language. The term refers also to the study of such rules and this field includes phonology, morphology and syntax, often complemented by phonetics, semantics and pragmatics. Fluent speakers of a language variety or "lect" have a set of internalized rules which constitutes its grammar. The vast majority of the information in the grammar is – at least in the case of one's native language – acquired not by conscious study or instruction but by hearing other speakers. Much of this work is done during early childhood; learning a language later in life usually involves more explicit instruction. Thus, grammar is the cognitive information underlying language use. The term "grammar" can also describe the rules which govern the linguistic behavior of a group of speakers. For example, the term "English grammar" may refer to the whole of English grammar; that is, to the grammars of all the speakers of the language, in which case the term encompasses a great deal of variation.
https://en.wikipedia.org/wiki?curid=12569
Gigabyte The gigabyte () is a multiple of the unit byte for digital information. The prefix "giga" means 10^9 in the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit symbol for the gigabyte is GB. This definition is used in all contexts of science, engineering, business, and many areas of computing, including hard drive, solid state drive, and tape capacities, as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1,073,741,824 (1024^3 or 2^30) bytes, particularly for sizes of RAM. The use of "gigabyte" may thus be ambiguous. Hard disk capacities are described and marketed by drive manufacturers using the standard metric definition of the gigabyte, but when a 400 GB drive's capacity is displayed by, for example, Microsoft Windows, it is reported as 372 GB, using a binary interpretation. To address this ambiguity, the International System of Quantities standardizes the binary prefixes, which denote a series of integer powers of 1024. With these prefixes, a memory module that is labeled as having the size "1 GB" has one gibibyte (1 GiB) of storage capacity. Using the ISQ definitions, the "372 GB" reported for the hard drive is actually 372 GiB (400 GB). The term "gigabyte" is commonly used to mean either 1000^3 bytes or 1024^3 bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2, but lacked a convenient name. As 1024 (2^10) is approximately 1000 (10^3), roughly corresponding to SI multiples, it was used for binary multiples as well. In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes, requiring that the gigabyte strictly denote 1000^3 bytes and the gibibyte denote 1024^3 bytes. By the end of 2007, the IEC standard had been adopted by the IEEE, EU, and NIST, and in 2009 it was incorporated in the International System of Quantities. Nevertheless, the term gigabyte continues to be widely used with two different meanings. The decimal definition, based on powers of 10, uses the prefix giga- as defined in the International System of Units (SI). This is the definition recommended by the International Electrotechnical Commission (IEC). It is used in networking contexts and most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The file manager of Mac OS X version 10.6 and later versions is a notable example of this usage in software, reporting file sizes in decimal units. The binary definition uses powers of 2, as does the architectural principle of binary computers. This usage is widely promulgated by some operating systems, such as Microsoft Windows in reference to computer memory (e.g., RAM). This definition is synonymous with the unambiguous unit gibibyte. Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers grouped most consumer hard drive capacities into size classes expressed in decimal gigabytes, such as "500 GB". The exact capacity of a given drive model is usually slightly larger than the class designation. Practically all manufacturers of hard disk drives and flash-memory disk devices continue to define one gigabyte as 1,000,000,000 bytes, which is displayed on the packaging. 
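The 400 GB versus 372 GB example above is purely a matter of arithmetic. The following short Python sketch (an illustration added here, not part of any standard; the variable names are arbitrary) shows how a single advertised capacity yields both figures depending on whether decimal or binary units are applied:

```python
# Illustrative sketch: how a drive marketed in decimal gigabytes appears
# when its size is reinterpreted with binary (GiB) units, as some
# operating systems do when reporting capacity.
GB = 10**9    # decimal gigabyte (SI definition)
GiB = 2**30   # gibibyte (binary definition)

advertised_bytes = 400 * GB             # a "400 GB" drive as marketed
reported_gib = advertised_bytes / GiB   # what a binary-interpreting OS shows

print(f"{advertised_bytes:,} bytes = {advertised_bytes / GB:.0f} GB "
      f"= {reported_gib:.1f} GiB")
# 400,000,000,000 bytes = 400 GB = 372.5 GiB
```

Rounded down to whole units, the same drive is therefore shown as 372 GB by software that applies the binary interpretation.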
Some operating systems such as OS X express hard drive capacity or file size using decimal multipliers, while others such as Microsoft Windows report size using binary multipliers. This discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB (meaning 400 × 10^9 bytes) might be reported by the operating system as 372 GB, meaning 372 GiB. The JEDEC memory standards use IEEE 100 nomenclature, which quotes the gigabyte as 1,073,741,824 (2^30) bytes. The difference between units based on decimal and binary prefixes increases as a semi-logarithmic (linear-log) function; for example, the decimal kilobyte value is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte value. This means that a 300 GB (279 GiB) hard disk might be indicated variously as 300 GB, 279 GB or 279 GiB, depending on the operating system. As storage sizes increase and larger units are used, these differences become even more pronounced. The most recent lawsuits arising from alleged consumer confusion over the binary and decimal definitions used for "gigabyte" have ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10^9) bytes (the decimal definition) rather than the binary definition (2^30 bytes) for commercial transactions. Specifically, the courts held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' ... The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'" Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled. Because of their physical design, the capacity of modern computer random access memory devices, such as DIMM modules, is always a multiple of a power of 1024. It is thus convenient to use prefixes denoting powers of 1024, known as binary prefixes, in describing them. For example, a memory capacity of 1,073,741,824 bytes is conveniently expressed as 1 GiB rather than as 1.074 GB. The former specification is, however, often quoted as "1 GB" when applied to random access memory. Software allocates memory in varying degrees of granularity as needed to fulfill data structure requirements, and binary multiples are usually not required. Other computer capacities and rates, like storage hardware size, data transfer rates, clock speeds, operations per second, etc., do not depend on an inherent base, and are usually presented in decimal units. For example, the manufacturer of a "300 GB" hard drive is claiming a capacity of 300,000,000,000 bytes, not 300 × 1024^3 (which would be 322,122,547,200) bytes. The "gigabyte" symbol is encoded by Unicode at code point U+3387 (㎇).
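The percentages quoted above follow directly from the ratio 1000^n / 1024^n, which shrinks as the prefixes grow. A minimal sketch (illustrative only, not drawn from any standard) makes the pattern explicit:

```python
# Ratio of each decimal prefix to the corresponding binary prefix.
# The gap widens with every step, which is why the discrepancy is more
# noticeable for gigabytes and terabytes than for kilobytes.
for n, pair in enumerate(["kilobyte/kibibyte", "megabyte/mebibyte",
                          "gigabyte/gibibyte", "terabyte/tebibyte"], start=1):
    ratio = 1000**n / 1024**n
    print(f"{pair}: {ratio:.1%}")
# kilobyte/kibibyte: 97.7%
# megabyte/mebibyte: 95.4%
# gigabyte/gibibyte: 93.1%
# terabyte/tebibyte: 90.9%
```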
https://en.wikipedia.org/wiki?curid=12570
Galaxy groups and clusters Galaxy groups and clusters are the largest known gravitationally bound objects to have arisen thus far in the process of cosmic structure formation. They form the densest part of the large-scale structure of the Universe. In models for the gravitational formation of structure with cold dark matter, the smallest structures collapse first and eventually build the largest structures, clusters of galaxies. Clusters therefore formed relatively recently, between 10 billion years ago and now. Groups and clusters may contain ten to thousands of individual galaxies. The clusters themselves are often associated with larger, non-gravitationally bound groups called superclusters. Groups of galaxies are the smallest aggregates of galaxies. They typically contain no more than 50 galaxies in a diameter of 1 to 2 megaparsecs (Mpc) (see 10^22 m for distance comparisons). Their mass is approximately 10^13 solar masses. The spread of velocities for the individual galaxies is about 150 km/s. However, this definition should be used as a guide only, as larger and more massive galaxy systems are sometimes classified as galaxy groups. Groups are the most common structures of galaxies in the universe, comprising at least 50% of the galaxies in the local universe. Groups have a mass range between those of the very large elliptical galaxies and clusters of galaxies. Our own Galaxy, the Milky Way, is contained in the Local Group of more than 54 galaxies. In July 2017 S. Paul, R. S. John et al. defined clear distinguishing parameters for classifying galaxy aggregations as 'galaxy groups' and 'clusters' on the basis of scaling laws that they follow. According to this paper, galaxy aggregations less massive than 8 × 10^13 solar masses are classified as galaxy groups. Clusters are larger than groups, although there is no sharp dividing line between the two. When observed visually, clusters appear to be collections of galaxies held together by mutual gravitational attraction. However, their velocities are too large for them to remain gravitationally bound by their mutual attractions, implying the presence of either an additional invisible mass component or an additional attractive force besides gravity. X-ray studies have revealed the presence of large amounts of intergalactic gas known as the intracluster medium. This gas is very hot, between 10^7 K and 10^8 K, and hence emits X-rays in the form of bremsstrahlung and atomic line emission. The total mass of the gas is greater than that of the galaxies by roughly a factor of two. However, this is still not enough mass to keep the galaxies in the cluster. Since this gas is in approximate hydrostatic equilibrium with the overall cluster gravitational field, the total mass distribution can be determined. It turns out the total mass deduced from this measurement is approximately six times larger than the mass of the galaxies or the hot gas. The missing component is known as dark matter and its nature is unknown. In a typical cluster perhaps only 5% of the total mass is in the form of galaxies, maybe 10% in the form of hot X-ray emitting gas and the remainder is dark matter. Brownstein and Moffat use a theory of modified gravity to explain X-ray cluster masses without dark matter. Observations of the Bullet Cluster are the strongest evidence for the existence of dark matter; however, Brownstein and Moffat have shown that their modified gravity theory can also account for the properties of the cluster. 
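The mass determination from hydrostatic equilibrium mentioned above can be made concrete. As an illustrative sketch (a standard textbook form, not a formula quoted from this article), assuming spherical symmetry and an ideal intracluster gas, the total mass enclosed within radius r is

```latex
% Hydrostatic mass estimate for a galaxy cluster (illustrative sketch).
% k_B: Boltzmann constant, G: gravitational constant, \mu: mean molecular weight,
% m_p: proton mass, T(r): gas temperature, \rho_{gas}(r): gas density.
M(<r) = -\,\frac{k_B\, T(r)\, r}{G\, \mu\, m_p}
        \left( \frac{\mathrm{d}\ln \rho_{\mathrm{gas}}}{\mathrm{d}\ln r}
             + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r} \right)
```

X-ray measurements of the gas density and temperature profiles therefore yield the total gravitating mass, which is how the roughly six-fold excess over the visible mass is inferred.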
Clusters of galaxies have been found in surveys by a number of observational techniques and have been studied in detail using many methods. Clusters of galaxies are the most recent and most massive objects to have arisen in the hierarchical structure formation of the Universe, and the study of clusters reveals how galaxies form and evolve. Clusters have two important properties: their masses are large enough to retain any energetic gas ejected from member galaxies, and the thermal energy of the gas within the cluster is observable within the X-ray bandpass. The observed state of gas within a cluster is determined by a combination of shock heating during accretion, radiative cooling, and thermal feedback triggered by that cooling. The density, temperature, and substructure of the intracluster X-ray gas therefore represent the entire thermal history of cluster formation. To better understand this thermal history one needs to study the entropy of the gas, because entropy is the quantity most directly changed by increasing or decreasing the thermal energy of intracluster gas.
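In this context "entropy" is normally quoted not as thermodynamic entropy but as the related quantity K; a commonly used definition (given here as an illustrative note, not taken from the article) is

```latex
% Conventional "entropy" of the intracluster medium in X-ray astronomy.
% k_B: Boltzmann constant, T: gas temperature, n_e: electron number density.
K = \frac{k_B\, T}{n_e^{2/3}}
```

K is unchanged by adiabatic compression, rises when shocks or feedback heat the gas, and falls when the gas cools radiatively, which is why it traces the thermal history of the cluster.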
https://en.wikipedia.org/wiki?curid=12571
Grus (constellation) Grus (, or colloquially ) is a constellation in the southern sky. Its name is Latin for the crane, a type of bird. It is one of twelve constellations conceived by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Grus first appeared on a celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas "Uranometria" of 1603. French explorer and astronomer Nicolas-Louis de Lacaille gave Bayer designations to its stars in 1756, some of which had been previously considered part of the neighbouring constellation Piscis Austrinus. The constellations Grus, Pavo, Phoenix and Tucana are collectively known as the "Southern Birds". The constellation's brightest star, Alpha Gruis, is also known as Alnair and appears as a 1.7-magnitude blue-white star. Beta Gruis is a red giant variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. Six star systems have been found to have planets: the red dwarf Gliese 832 is one of the closest stars to Earth to have a planetary system. Another—WASP-95—has a planet that orbits every two days. Deep-sky objects found in Grus include the planetary nebula IC 5148, also known as the Spare Tyre Nebula, and a group of four interacting galaxies known as the Grus Quartet. The stars that form Grus were originally considered part of the neighbouring constellation Piscis Austrinus (the southern fish), with Gamma Gruis seen as part of the fish's tail. The stars were first defined as a separate constellation by the Dutch astronomer Petrus Plancius, who created twelve new constellations based on the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the "Eerste Schipvaart", to the East Indies. Grus first appeared on a 35-centimetre-diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. Its first depiction in a celestial atlas was in the German cartographer Johann Bayer's "Uranometria" of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name "Den Reygher", "The Heron", but Bayer followed Plancius and Hondius in using Grus. An alternative name for the constellation, "Phoenicopterus" (Latin "flamingo"), was used briefly during the early 17th century, seen in the 1605 work "Cosmographiae Generalis" by Paul Merula of Leiden University and a c. 1625 globe by Dutch globe maker Pieter van den Keere. Astronomer Ian Ridpath has reported the symbolism likely came from Plancius originally, who had worked with both of these people. Grus and the nearby constellations Phoenix, Tucana and Pavo are collectively called the "Southern Birds". The stars that correspond to Grus were generally too far south to be seen from China. In Chinese astronomy, Gamma and Lambda Gruis may have been included in the tub-shaped asterism "Bàijiù", along with stars from Piscis Austrinus. In Central Australia, the Arrernte and Luritja people living on a mission in Hermannsburg viewed the sky as divided between them, east of the Milky Way representing Arrernte camps and west denoting Luritja camps. Alpha and Beta Gruis, along with Fomalhaut, Alpha Pavonis and the stars of Musca, were all claimed by the Arrernte. Grus is bordered by Piscis Austrinus to the north, Sculptor to the northeast, Phoenix to the east, Tucana to the south, Indus to the southwest, and Microscopium to the west. 
Bayer straightened the tail of Piscis Austrinus to make way for Grus in his "Uranometria". Covering 366 square degrees, it ranks 45th of the 88 modern constellations in size and covers 0.887% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Gru". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined as a polygon of 6 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.31° and −56.39°. Grus is located too far south to be seen by observers in the British Isles and the northern United States, though it can easily be seen from Florida or California; the whole constellation is visible to observers south of latitude 33°N. Keyser and de Houtman assigned twelve stars to the constellation. Bayer depicted Grus on his chart, but did not assign its stars Bayer designations. French explorer and astronomer Nicolas-Louis de Lacaille labelled them Alpha to Phi in 1756 with some omissions. In 1879, American astronomer Benjamin Gould added Kappa, Nu, Omicron and Xi, which had all been catalogued by Lacaille but not given Bayer designations. Lacaille considered them too faint, while Gould thought otherwise. Xi Gruis had originally been placed in Microscopium. Conversely, Gould dropped Lacaille's Sigma as he thought it was too dim. Grus has several bright stars. Marking the left wing is Alpha Gruis, a blue-white star of spectral type B6V and apparent magnitude 1.7, around 101 light-years from Earth. Its traditional name, Alnair, means "the bright one" and refers to its status as the brightest star in Grus (although the Arabians saw it as the brightest star in the Fish's tail, as Grus was then depicted). Alnair is around 380 times as luminous and has over 3 times the diameter of the Sun. Lying 5 degrees west of Alnair and denoting the Crane's heart is Beta Gruis (proper name Tiaki), a red giant of spectral type M5III. It has a diameter of 0.8 astronomical units (AU) (if placed in the Solar System it would extend to the orbit of Venus) and is located around 170 light-years from Earth. It is a variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. An imaginary line drawn from the Great Square of Pegasus through Fomalhaut will lead to Alnair and Beta Gruis. Lying in the northwest corner of the constellation and marking the crane's eye is Gamma Gruis, a blue-white subgiant of spectral type B8III and magnitude 3.0 lying around 211 light-years from Earth. Also known as Al Dhanab, it has finished fusing its core hydrogen and has begun cooling and expanding, which will see it transform into a red giant. There are several naked-eye double stars in Grus. Forming a triangle with Alnair and Beta Gruis, Delta Gruis is an optical double whose components—Delta1 and Delta2—are separated by 45 arcseconds. Delta1 is a yellow giant of spectral type G7III and magnitude 4.0, 309 light-years from Earth, and may have its own magnitude 12 orange dwarf companion. Delta2 is a red giant of spectral type M4.5III and semiregular variable that ranges between magnitudes 3.99 and 4.2, located 325 light-years from Earth. It has around 3 times the mass and 135 times the diameter of the Sun. Mu Gruis, composed of Mu1 and Mu2, is also an optical double—both stars are yellow giants of spectral type G8III around 2.5 times as massive as the Sun with surface temperatures of around 4900 K. 
Mu1 is the brighter of the two at magnitude 4.8 located around 275 light-years from Earth, while Mu2 the dimmer at magnitude 5.11 lies 265 light-years distant from Earth. Pi Gruis, an optical double with a variable component, is composed of Pi1 Gruis and Pi2. Pi1 is a semi-regular red giant of spectral type S5, ranging from magnitude 5.31 to 7.01 over a period of 191 days, and is around 532 light-years from Earth. One of the brightest S-class stars to Earth viewers, it has a companion star of apparent magnitude 10.9 with sunlike properties, being a yellow main sequence star of spectral type G0V. The pair make up a likely binary system. Pi2 is a giant star of spectral type F3III-IV located around 130 light-years from Earth, and is often brighter than its companion at magnitude 5.6. Marking the right wing is Theta Gruis, yet another double star, lying 5 degrees east of Delta1 and Delta2. RZ Gruis is a binary system of apparent magnitude 12.3 with occasional dimming to 13.4, whose components—a white dwarf and main sequence star—are thought to orbit each other roughly every 8.5 to 10 hours. It belongs to the UX Ursae Majoris subgroup of cataclysmic variable star systems, where material from the donor star is drawn to the white dwarf where it forms an accretion disc that remains bright and outshines the two component stars. The system is poorly understood, though the donor star has been calculated to be of spectral type F5V. These stars have spectra very similar to novae that have returned to quiescence after outbursts, yet they have not been observed to have erupted themselves. The American Association of Variable Star Observers recommends watching them for future events. CE Gruis (also known as Grus V-1) is a faint (magnitude 18–21) star system also composed of a white dwarf and donor star; in this case the two are so close they are tidally locked. Known as polars, material from the donor star does not form an accretion disc around the white dwarf, but rather streams directly onto it. Six star systems are thought to have planetary systems. Tau1 Gruis is a yellow star of magnitude 6.0 located around 106 light-years away. It may be a main sequence star or be just beginning to depart from the sequence as it expands and cools. In 2002 the star was found to have a planetary companion. HD 215456, HD 213240 and WASP-95 are yellow sunlike stars discovered to have two planets, a planet and a remote red dwarf, and a hot Jupiter, respectively; this last—WASP-95b—completes an orbit round its sun in a mere two days. Gliese 832 is a red dwarf of spectral type M1.5V and apparent magnitude 8.66 located only 16.1 light-years distant; hence it is one of the nearest stars to the Solar System. A Jupiter-like planet—Gliese 832 b—orbiting the red dwarf over a period of 9.4±0.4 years was discovered in 2008. WISE 2220−3628 is a brown dwarf of spectral type Y, and hence one of the coolest star-like objects known. It has been calculated as being around 26 light-years distant from Earth. In July 2019, astronomers reported finding a star, S5-HVS1, traveling , faster that any other star detected so far. The star is in the Grus constellation in the southern sky, and about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy. Nicknamed the spare-tyre nebula, IC 5148 is a planetary nebula located around 1 degree west of Lambda Gruis. 
Around 3000 light-years distant, it is expanding at 50 kilometres a second, one of the fastest rates of expansion of all planetary nebulae. Northeast of Theta Gruis are four interacting galaxies known as the Grus Quartet. These galaxies are NGC 7552, NGC 7590, NGC 7599, and NGC 7582. The latter three galaxies occupy an area of sky only 10 arcminutes across and are sometimes referred to as the "Grus Triplet," although all four are part of a larger loose group of galaxies called the IC 1459 Grus Group. NGC 7552 and 7582 are exhibiting high starburst activity; this is thought to have arisen because of the tidal forces from interacting. Located on the border of Grus with Piscis Austrinus, IC 1459 is a peculiar E3 giant elliptical galaxy. It has a fast counterrotating stellar core, and shells and ripples in its outer region. The galaxy has an apparent magnitude of 11.9 and is around 80 million light-years distant. NGC 7424 is a barred spiral galaxy with an apparent magnitude of 10.4. located around 4 degrees west of the Grus Triplet. Approximately 37.5 million light-years distant, it is about 100,000 light-years in diameter, has well defined spiral arms and is thought to resemble the Milky Way. Two ultraluminous X-ray sources and one supernova have been observed in NGC 7424. SN 2001ig was discovered in 2001 and classified as a Type IIb supernova, one that initially showed a weak hydrogen line in its spectrum, but this emission later became undetectable and was replaced by lines of oxygen, magnesium and calcium, as well as other features that resembled the spectrum of a Type Ib supernova. A massive star of spectral type F, A or B is thought to be the surviving binary companion to SN 2001ig, which was believed to have been a Wolf–Rayet star. Located near Alnair is NGC 7213, a face-on type 1 Seyfert galaxy located approximately 71.7 million light-years from Earth. It has an apparent magnitude of 12.1. Appearing undisturbed in visible light, it shows signs of having undergone a collision or merger when viewed at longer wavelengths, with disturbed patterns of ionized hydrogen including a filament of gas around 64,000 light-years long. It is part of a group of ten galaxies. NGC 7410 is a spiral galaxy discovered by British astronomer John Herschel during observations at the Cape of Good Hope in October 1834. The galaxy has a visual magnitude of 11.7 and is approximately 122 million light-years distant from Earth.
https://en.wikipedia.org/wiki?curid=12572
Galba Galba (; Servius Galba Caesar Augustus; ; 24 December 3 BC – 15 January AD 69) was Roman emperor from 68 to 69, the first emperor in the Year of the Four Emperors. He was known as Lucius Livius Galba Ocella prior to taking the throne as a result of his adoption by his stepmother, Livia Ocellina. The governor of Hispania at the time of the rebellion of Gaius Julius Vindex in Gaul, he seized the throne following Nero's suicide. Born into a wealthy family, Galba held at various times the offices of praetor, consul, and governor of the provinces Aquitania, Upper Germany, and Africa during the first half of the first century AD. He retired during the latter part of Claudius' reign but Nero later granted him the governorship of Hispania. Taking advantage of the defeat of Vindex's rebellion and Nero's suicide, he became emperor with the support of the Praetorian Guard. His physical weakness and general apathy led to him being dominated by favorites. Unable to gain popularity with the people or maintain the support of the Praetorian Guard, Galba was murdered by Otho, who then became emperor. Galba was not related to any of the emperors of the Julio-Claudian dynasty, but he was a member of a distinguished noble family. The origin of the cognomen Galba is uncertain. Suetonius offers a number of possible explanations; the first member of the gens Sulpicia to bear the name might have gotten the name from the term "galba", which the Romans used to describe the Gauls, or after an insect called "galbae". One of Galba's ancestors had been consul in 200 BC, and another of his ancestors was consul in 144 BC; the later emperor's father and brother, both named Gaius, would hold the office in 5 BC and AD 22 respectively. Galba's grandfather was a historian and his son was a barrister whose first marriage was to Mummia Achaica, granddaughter of Quintus Lutatius Catulus and great-granddaughter of Lucius Mummius Achaicus; Galba prided himself on his descent from his great-grandfather Catulus. According to Suetonius, he fabricated a genealogy of paternal descent from the god Jupiter and maternal descent from the legendary Pasiphaë, wife of Minos. Reportedly, Galba was distantly related to Livia to whom he had much respect and in turn by whom he was advanced in his career; in her will she left him fifty million sesterces; Emperor Tiberius however cheated Galba by reducing the amount to five hundred thousand sesterces and never even paid Galba the reduced amount. Servius Sulpicius Galba was born near Terracina on 24 December 3 BC. His elder brother Gaius fled from Rome and committed suicide because the emperor Tiberius would not allow him to control a Roman province. Livia Ocellina became the second wife of Galba's father, whom she may have married because of his wealth; he was short and hunchbacked. Ocellina adopted Galba, and he took the name Lucius Livius Galba Ocella. Galba had a sexual appetite for males, whom he preferred over females; according to Suetonius, "he was more inclined to … the hard bodied and those past their prime". Nevertheless, he married a woman named Aemilia Lepida and had two sons. Aemilia and their sons died during the early years of the reign of Claudius (r. 41–54). Galba would remain a widower for the rest of his life. Galba became praetor in about 30, then governor of Aquitania for about a year, then consul in 33. 
In 39 the emperor Caligula learned of a plot against himself in which Gnaeus Cornelius Lentulus Gaetulicus, the general of the Upper German legions, was an important figure; Caligula installed Galba in the post held by Gaetulicus. According to one report, Galba ran alongside Caligula's chariot for twenty miles. As commander of the legions of Upper Germany, Galba gained a reputation as a disciplinarian. Suetonius writes that Galba was advised to take the throne following the assassination of Caligula in 41, but loyally served Caligula's uncle and successor Claudius (r. 41–54); this story may simply be fictional. Galba was appointed as governor of Africa in 44 or 45. He retired at an uncertain time during the reign of Claudius, possibly in 49. He was recalled in 59 or 60 by the emperor Nero (r. 54–68) to govern Hispania. A rebellion against Nero was orchestrated by Gaius Julius Vindex in Gaul on the anniversary of the death of Nero's mother, Agrippina the Younger, in 68. Shortly afterwards Galba, in rebellion against Nero, rejected the title "General of Caesar" in favor of "General of the Senate and People of Rome". He was supported by the imperial official Tigellinus. On 8 June 68 another imperial official, Nymphidius Sabinus, falsely announced to the Praetorian Guard that Nero had fled to Egypt, and the Senate proclaimed Galba emperor. Nero then committed assisted suicide with help from his secretary. Upon becoming emperor, Galba was faced with the rebellion of Nymphidius Sabinus, who had his own aspirations for the imperial throne. However, Sabinus was killed by the Praetorians before he could take the throne. While Galba was travelling to Rome with the Lusitanian governor Marcus Salvius Otho, his army was attacked by a legion that had been organized by Nero; a number of Galba's troops were killed in the fighting. Galba, who suffered from chronic gout by the time he came to the throne, was advised by a corrupt group which included the Spanish general Titus Vinius, the praetorian prefect Cornelius Laco, and Icelus, a freedman of Galba. Galba seized the property of Roman citizens, disbanded his German bodyguard, and did not pay the Praetorians and the soldiers who had fought against Vindex. These actions made him unpopular. On 1 January 69, the day Galba and Vinius took the office of consul, the fourth and twenty-second legions of Upper Germany refused to swear loyalty to Galba. They toppled his statues, demanding that a new emperor be chosen. On the following day, the soldiers of Lower Germany also refused to swear their loyalty and proclaimed the governor of the province, Aulus Vitellius, as emperor. Galba tried to ensure his authority as emperor was recognized by adopting the nobleman Lucius Calpurnius Piso Licinianus as his successor. Nevertheless, Galba was killed by the Praetorians on 15 January. According to Suetonius, Galba put on a linen corset, though remarking that it was little protection against so many swords; when a soldier claimed to have killed Otho, Galba snapped "On what authority?". He was lured out to the scene of his assassination by a false report spread by the conspirators. Galba either tried to buy his life with a promise of the withheld bounty or asked that he be beheaded. His only defender was a centurion of the Praetorian Guard named Sempronius Densus, who was killed trying to protect Galba with a pugio. 120 people later petitioned Otho, claiming to have killed Galba; they would later be executed by Vitellius.
A company of German soldiers to whom he had once done a kindness rushed to help him; however, they took a wrong turn and arrived too late. He was killed near the Lacus Curtius. Vinius tried to run away, calling out that Otho had not ordered him killed, but was run through with a spear. Laco was banished to an island, where he was later murdered by soldiers of Otho. Icelus was publicly executed. Piso was also killed; his head, along with Galba's and Vinius', was placed on a pole, and Otho was then acclaimed as emperor. Galba's head was brought by a soldier to Otho's camp, where camp boys mocked it on a lance; Galba had angered them previously by remarking that his vigor was still unimpaired. Vinius' head was sold to his daughter for 2500 drachmas; Piso's head was given to his wife. Galba's head was bought for 100 gold pieces by a freedman, who threw it down at the Sessorium, where his master Patrobius Neronianus had been killed by Galba. The body of Galba was taken up by Priscus Helvidius with the permission of Otho; at night Galba's steward Argivus took both the head and body to a tomb in Galba's private gardens on the Aurelian Way. Suetonius wrote the following descriptions of Galba's character and appearance: "Even before he reached middle life, he persisted in keeping up an old and forgotten custom of his country, which survived only in his own household, of having his freedmen and slaves appear before him twice a day in a body, greeting him in the morning and bidding him farewell at evening, one by one." "His double reputation for cruelty and avarice had gone before him; men said that he had punished the cities of the Spanish and Gallic provinces which had hesitated about taking sides with him by heavier taxes and some even by the razing of their walls, putting to death the governors and imperial deputies along with their wives and children. Further, that he had melted down a golden crown of fifteen pounds weight, which the people of Tarraco had taken from their ancient temple of Jupiter and presented to him, with orders that the three ounces which were found lacking be exacted from them. This reputation was confirmed and even augmented immediately on his arrival in the city. For having compelled some marines whom Nero had made regular soldiers to return to their former position as rowers, upon their refusing and obstinately demanding an eagle and standards, he not only dispersed them by a cavalry charge, but even decimated them. He also disbanded a cohort of Germans, whom the previous Caesars had made their body-guard and had found absolutely faithful in many emergencies, and sent them back to their native country without any rewards, alleging that they were more favourably inclined towards Gnaeus Dolabella, near whose gardens they had their camp. The following tales too were told in mockery of him, whether truly or falsely: that when an unusually elegant dinner was set before him, he groaned aloud; that when his duly appointed steward presented his expense account, he handed him a dish of beans in return for his industry and carefulness; and that when the flute player Canus greatly pleased him, he presented him with five denarii, which he took from his own purse with his own hand.
Accordingly his coming was not so welcome as it might have been, and this was apparent at the first performance in the theatre; for when the actors of an Atellan farce began the familiar lines "Here comes Onesimus from his farm", all the spectators at once finished the song in chorus and repeated it several times with appropriate gestures, beginning with that verse. Thus his popularity and prestige were greater when he won the empire than while he ruled it, though he gave many proofs of being an excellent prince; but he was by no means so much loved for those qualities as he was hated for his acts of the opposite character." Particularly damaging was his falling under the influence of Vinius, Laco, and Icelus: "...To these brigands, each with his different vice, he so entrusted and handed himself over as their tool, that his conduct was far from consistent; for now he was more exacting and niggardly, and now more extravagant and reckless than became a prince chosen by the people and of his time of life. He condemned to death distinguished men of both orders on trivial suspicions without a trial. He rarely granted Roman citizenship, and the privileges of threefold paternity to hardly one or two, and even to those only for a fixed and limited time. When the jurors petitioned that a sixth division be added to their number, he not only refused, but even deprived them of the privilege granted by Claudius, of not being summoned for court duty in winter and at the beginning of the year." In regard to his appointment of Vitellius to Lower Germany: "Galba surprised everyone by sending him to Lower Germany. Some think that it was due to Titus Vinius, who had great influence at the time, and whose friendship Vitellius had long since won through their common support of the Blues. But since Galba openly declared that no men were less to be feared than those who thought of nothing but eating, and that Vitellius's bottomless gullet might be filled from the resources of the province, it is clear to anyone that he was chosen rather through contempt than favour." "He was of average height, very bald, with blue eyes and a hooked nose. His hands and feet were so distorted by gout that he could not endure a shoe for long, unroll a book, or even hold one. The flesh on his right side too had grown out and hung down to such an extent, that it could with difficulty be held in place by a bandage. It is said that he was a heavy eater and in winter time was in the habit of taking food even before daylight, while at dinner he helped himself so lavishly that he would have the leavings which remained in a heap before him passed along and distributed among the attendants who waited on him... He met his end in the seventy-third year of his age and the seventh month of his reign. The senate, as soon as it was allowed to do so, voted him a statue standing upon a column adorned with the beaks of ships, in the part of the Forum where he was slain; but Vespasian annulled this decree, believing that Galba had sent assassins from Spain to Judaea, to take his life." Tacitus (Histories 1.49) comments on the character of Galba: "He seemed too great to be a subject so long as he was subject, and all would have agreed that he was equal to the imperial office if he had never held it."
https://en.wikipedia.org/wiki?curid=12576
George Stephenson George Stephenson (9 June 1781 – 12 August 1848) was a British civil engineer and mechanical engineer. Renowned as the "Father of Railways", Stephenson was considered by the Victorians a great example of diligent application and thirst for improvement. Self-help advocate Samuel Smiles particularly praised his achievements. His chosen rail gauge, sometimes called 'Stephenson gauge', was the basis for the standard gauge used by most of the world's railways. Pioneered by Stephenson, rail transport was one of the most important technological inventions of the 19th century and a key component of the Industrial Revolution. Built by George and his son Robert's company Robert Stephenson and Company, the "Locomotion" No. 1 was the first steam locomotive to carry passengers on a public rail line, the Stockton and Darlington Railway, in 1825. George also built the first public inter-city railway line in the world to use locomotives, the Liverpool and Manchester Railway, which opened in 1830. George Stephenson was born on 9 June 1781 in Wylam, Northumberland, which is 9 miles (15 km) west of Newcastle upon Tyne. He was the second child of Robert and Mabel Stephenson, neither of whom could read or write. Robert was the fireman for the Wylam Colliery pumping engine, earning a very low wage, so there was no money for schooling. At 17, Stephenson became an engineman at Water Row Pit in nearby Newburn. George realised the value of education and paid to study at night school to learn reading, writing and arithmetic – he was illiterate until the age of 18. In 1801 he began work at Black Callerton Colliery south of Ponteland as a 'brakesman', controlling the winding gear at the pit. In 1802 he married Frances Henderson and moved to Willington Quay, east of Newcastle. There he worked as a brakesman while they lived in one room of a cottage. George made shoes and mended clocks to supplement his income. Their first child Robert was born in 1803, and in 1804 they moved to Dial Cottage at West Moor, near Killingworth, where George worked as a brakesman at Killingworth Pit. Their second child, a daughter, was born in July 1805. She was named Frances after her mother. The child died after just three weeks and was buried in St Bartholomew's Church, Long Benton, north of Newcastle. In 1806 George's wife Frances died of consumption (tuberculosis). She was buried in the same churchyard as their daughter on 16 May 1806, though the location of the grave has since been lost. George decided to find work in Scotland and left Robert with a local woman while he went to work in Montrose. After a few months he returned, probably because his father was blinded in a mining accident. He moved back into a cottage at West Moor and his unmarried sister Eleanor moved in to look after Robert. In 1811 the pumping engine at High Pit, Killingworth, was not working properly and Stephenson offered to improve it. He did so with such success that he was promoted to enginewright for the collieries at Killingworth, responsible for maintaining and repairing all the colliery engines. He became an expert in steam-driven machinery. In 1815, aware of the explosions often caused in mines by naked flames, Stephenson began to experiment with a safety lamp that would burn in a gaseous atmosphere without causing an explosion. At the same time, the eminent scientist and Cornishman Humphry Davy was also looking at the problem.
Despite his lack of scientific knowledge, Stephenson, by trial and error, devised a lamp in which the air entered via tiny holes, through which the flames of the lamp could not pass. A month before Davy presented his design to the Royal Society, Stephenson demonstrated his own lamp to two witnesses by taking it down Killingworth Colliery and holding it in front of a fissure from which firedamp was issuing. The two designs differed; Davy's lamp was surrounded by a screen of gauze, whereas Stephenson's prototype lamp had a perforated plate contained in a glass cylinder. For his invention Davy was awarded £2,000, whilst Stephenson was accused of stealing the idea from Davy, because he was not considered capable of having produced the lamp by any approved scientific method. Stephenson, having come from the North-East, spoke with a broad Northumberland accent and not the 'Language of Parliament', which made him seem lowly. Realizing this, he made a point of educating his son Robert at a private school, where he was taught to speak Standard English with a Received Pronunciation accent. Partly because of this, in their future dealings with Parliament, the authorities came to prefer Robert to his father. A local committee of enquiry gathered in support of Stephenson, exonerated him, proved he had been working separately to create the 'Geordie Lamp', and awarded him £1,000, but Davy and his supporters refused to accept their findings and could not see how an uneducated man such as Stephenson could have arrived at the solution he had. In 1833 a House of Commons committee found that Stephenson had equal claim to having invented the safety lamp. Davy went to his grave believing that Stephenson had stolen his idea. The Stephenson lamp was used almost exclusively in North East England, whereas the Davy lamp was used everywhere else. The experience gave Stephenson a lifelong distrust of London-based, theoretical, scientific experts. In his book "George and Robert Stephenson", the author L.T.C. Rolt relates that opinion varied about the two lamps' efficiency: the Davy lamp gave more light, but the Geordie lamp was thought to be safer in a more gaseous atmosphere. He made reference to an incident at Oaks Colliery in Barnsley where both lamps were in use. Following a sudden strong influx of gas, the tops of all the Davy lamps became red hot (something that had caused an explosion in the past and risked doing so again), whilst all the Geordie lamps simply went out. There is a theory that it was Stephenson who indirectly gave the name of Geordies to the people of the North East of England. By this theory, the name of the Geordie lamp became attached to the North East pitmen themselves. By 1866 any native of Newcastle upon Tyne could be called a Geordie. Cornishman Richard Trevithick is credited with the first realistic design for a steam locomotive in 1802. Later, he visited Tyneside and built an engine there for a mine-owner. Several local men were inspired by this, and designed their own engines. Stephenson designed his first locomotive in 1814, a travelling engine designed for hauling coal on the Killingworth wagonway, named "Blücher" after the Prussian general Gebhard Leberecht von Blücher (it has been suggested that the name sprang from Blücher's rapid march of his army in support of Wellington at Waterloo).
"Blücher" was modelled on Matthew Murray’s locomotive "Willington", which George studied at Kenton and Coxlodge colliery on Tyneside, and was constructed in the colliery workshop behind Stephenson's home, Dial Cottage, on Great Lime Road. The locomotive could haul 30 tons of coal up a hill at , and was the first successful flanged-wheel adhesion locomotive: its traction depended on contact between its flanged wheels and the rail. Altogether, Stephenson is said to have produced 16 locomotives at Killingworth, although it has not proved possible to produce a convincing list of all 16. Of those identified, most were built for use at Killingworth or for the Hetton colliery railway. A six-wheeled locomotive was built for the Kilmarnock and Troon Railway in 1817 but was withdrawn from service because of damage to the cast-iron rails. Another locomotive was supplied to Scott's Pit railroad at Llansamlet, near Swansea, in 1819 but it too was withdrawn, apparently because it was under-boilered and again caused damage to the track. The new engines were too heavy to run on wooden rails or plate-way, and iron edge rails were in their infancy, with cast iron exhibiting excessive brittleness. Together with William Losh, Stephenson improved the design of cast-iron edge rails to reduce breakage; rails were briefly made by Losh, Wilson and Bell at their Walker ironworks. According to Rolt, Stephenson managed to solve the problem caused by the weight of the engine on the primitive rails. He experimented with a steam spring (to 'cushion' the weight using steam pressure acting on pistons to support the locomotive frame), but soon followed the practice of 'distributing' weight by using a number of wheels or bogies. For the Stockton and Darlington Railway Stephenson used wrought-iron malleable rails that he had found satisfactory, notwithstanding the financial loss he suffered by not using his own patented design. Stephenson was hired to build the 8-mile (13-km) Hetton colliery railway in 1820. He used a combination of gravity on downward inclines and locomotives for level and upward stretches. This, the first railway using no animal power, opened in 1822. This line used a gauge of which Stephenson had used before at the Killingworth wagonway. Other locomotives include: In 1821, a parliamentary bill was passed to allow the building of the Stockton and Darlington Railway (S&DR). The railway connected collieries near Bishop Auckland to the River Tees at Stockton, passing through Darlington on the way. The original plan was to use horses to draw coal carts on metal rails, but after company director Edward Pease met Stephenson, he agreed to change the plans. Stephenson surveyed the line in 1821, and assisted by his eighteen-year-old son Robert, construction began the same year. A manufacturer was needed to provide the locomotives for the line. Pease and Stephenson had jointly established a company in Newcastle to manufacture locomotives. It was set up as Robert Stephenson and Company, and George's son Robert was the managing director. A fourth partner was Michael Longridge of Bedlington Ironworks. On an early trade card, Robert Stephenson & Co was described as "Engineers, Millwrights & Machinists, Brass & Iron Founders". In September 1825 the works at Forth Street, Newcastle completed the first locomotive for the railway: originally named "Active", it was renamed "Locomotion" and was followed by "Hope", "Diligence" and "Black Diamond". The Stockton and Darlington Railway opened on 27 September 1825. 
Driven by Stephenson, "Locomotion" hauled an 80-ton load of coal and flour in two hours, reaching a speed of on one stretch. The first purpose-built passenger car, "Experiment", was attached and carried dignitaries on the opening journey. It was the first time passenger traffic had been run on a steam locomotive railway. The rails used for the line were wrought-iron, produced by John Birkinshaw at the Bedlington Ironworks. Wrought-iron rails could be produced in longer lengths than cast-iron and were less liable to crack under the weight of heavy locomotives. William Losh of Walker Ironworks thought he had an agreement with Stephenson to supply cast-iron rails, and Stephenson's decision caused a permanent rift between them. The gauge Stephenson chose for the line was subsequently adopted as the standard gauge for railways, not only in Britain, but throughout the world. Stephenson had ascertained by experiments at Killingworth that half the power of the locomotive was consumed by a gradient as little as 1 in 260. He concluded that railways should be kept as level as possible. He used this knowledge while working on the Bolton and Leigh Railway and the Liverpool and Manchester Railway (L&MR), executing a series of difficult cuttings, embankments and stone viaducts to level their routes. Defective surveying of the original route of the L&MR, caused by hostility from some affected landowners, meant Stephenson encountered difficulty during Parliamentary scrutiny of the original bill, especially under cross-examination by Edward Hall Alderson. The bill was rejected, and a revised bill for a new alignment was submitted and passed in a subsequent session. The revised alignment presented the problem of crossing Chat Moss, an apparently bottomless peat bog, which Stephenson overcame by unusual means, effectively floating the line across it. The method he used was similar to that used by John Metcalf, who constructed many miles of road across marshes in the Pennines by laying a foundation of heather and branches, which became bound together by the weight of the passing coaches, with a layer of stones on top. As the L&MR approached completion in 1829, its directors arranged a competition to decide who would build its locomotives, and the Rainhill Trials were run in October 1829. Entries could weigh no more than six tons and had to travel along the track for a total distance of . Stephenson's entry was "Rocket", and its performance in winning the contest made it famous. George's son Robert had been working in South America from 1824 to 1827 and returned to run the Forth Street Works while George was in Liverpool overseeing the construction of the line. Robert was responsible for the detailed design of "Rocket", although he was in constant postal communication with his father, who made many suggestions. One significant innovation, suggested by Henry Booth, treasurer of the L&MR, was the use of a fire-tube boiler, invented by French engineer Marc Seguin, which gave improved heat exchange. The opening ceremony of the L&MR, on 15 September 1830, drew luminaries from the government and industry, including the Prime Minister, the Duke of Wellington. The day started with a procession of eight trains setting out from Liverpool. The parade was led by "Northumbrian" driven by George Stephenson, and included "Phoenix" driven by his son Robert, "North Star" driven by his brother Robert and "Rocket" driven by assistant engineer Joseph Locke.
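Stephenson's finding, mentioned above, that a gradient of only 1 in 260 absorbed half a locomotive's power follows from comparing grade resistance with rolling resistance. A minimal Python sketch, in which the rolling-resistance coefficient and the train mass are illustrative assumptions rather than figures from the text:

# Why a 1-in-260 gradient roughly doubles the effort: compare grade resistance
# with rolling resistance. The rolling-resistance coefficient (~1/260) and the
# train mass are assumptions for illustration only.
G = 9.81              # m/s^2
MASS_KG = 80_000      # hypothetical 80-tonne train
C_ROLL = 1 / 260      # assumed rolling-resistance coefficient for iron wheels on iron rails
GRADIENT = 1 / 260    # the gradient quoted in the text

rolling = C_ROLL * MASS_KG * G      # resistance on level track, newtons
grade = GRADIENT * MASS_KG * G      # additional force needed on the gradient

print(f"share of tractive effort absorbed by the gradient: {grade / (rolling + grade):.0%}")

Because the assumed rolling-resistance coefficient happens to equal the gradient, the two terms are equal, so climbing the slope takes about as much effort as simply moving the train on the level.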
The day was marred by the death of William Huskisson, the Member of Parliament for Liverpool, who was struck by "Rocket". Stephenson evacuated the injured Huskisson to Eccles by train, but he died from his injuries. Despite the tragedy, the railway was a resounding success. Stephenson became famous, and was offered the position of chief engineer for a wide variety of other railways. 1830 also saw the grand opening of the skew bridge in Rainhill over the Liverpool and Manchester Railway. The bridge was the first to cross any railway at an angle. It required the structure to be constructed as two flat planes (overlapping in this case by ) between which the stonework forms a parallelogram shape when viewed from above. This has the effect of flattening the arch, and the solution is to lay the bricks forming the arch at an angle to the abutments (the piers on which the arches rest). The technique, which results in a spiral effect in the arch masonry, provides extra strength in the arch to compensate for the angled abutments. The bridge is still in use at Rainhill station, and carries traffic on the A57 (Warrington Road). The bridge is a listed structure. George Stephenson moved to the parish of Alton Grange (now part of Ravenstone) in Leicestershire in 1830, originally to consult on the Leicester and Swannington Railway, a line primarily proposed to take coal from the western coalfields of the county to Leicester. The promoters of the line, William Stenson and John Ellis, had difficulty raising the necessary capital, as the majority of local wealth had been invested in canals. Realising the potential and the need for the rail link, Stephenson himself invested £2,500 and raised the remaining capital through his network of connections in Liverpool. His son Robert was made chief engineer, with the first part of the line opening in 1832. During this same period the Snibston estate in Leicestershire came up for auction; it lay adjoining the proposed Swannington to Leicester route and was believed to contain valuable coal reserves. Stephenson, realising the financial potential of the site, given its proximity to the proposed rail link and the fact that the manufacturing town of Leicester was then being supplied with coal by canal from Derbyshire, bought the estate. Employing a method of mining previously used in the Midlands, called tubbing, to access the deep coal seams, he met with great success. Stephenson's coal mine delivered the first rail cars of coal into Leicester, dramatically reducing the price of coal and saving the town some £40,000 per annum. Stephenson remained at Alton Grange until 1838, before moving to Tapton House in Derbyshire. The next ten years were the busiest of Stephenson's life, as he was besieged with requests from railway promoters. Many of the first American railroad builders came to Newcastle to learn from Stephenson, and the first dozen or so locomotives utilised there were purchased from the Stephenson shops. Stephenson's conservative views on the capabilities of locomotives meant he favoured circuitous routes and civil engineering that were more costly than his successors thought necessary. For example, rather than the West Coast Main Line taking the direct route favoured by Joseph Locke over Shap between Lancaster and Carlisle, Stephenson was in favour of a longer sea-level route via Ulverston and Whitehaven. Locke's route was built. Stephenson tended to be more casual in estimating costs and paperwork in general.
He worked with Joseph Locke on the Grand Junction Railway, with half of the line allocated to each man. Stephenson's estimates and organising ability proved inferior to those of Locke, and the board's dissatisfaction led to Stephenson's resignation, causing a rift between the two men which was never healed. Despite Stephenson's loss of some routes to competitors due to his caution, he was offered more work than he could cope with, and was unable to accept all that was offered. He worked on the North Midland line from Derby to Leeds, the York and North Midland line from Normanton to York, the Manchester and Leeds, the Birmingham and Derby, and the Sheffield and Rotherham, among many others. Stephenson became a reassuring name rather than a cutting-edge technical adviser. He was the first president of the Institution of Mechanical Engineers on its formation in 1847. By this time he had settled into semi-retirement, supervising his mining interests in Derbyshire – tunnelling for the North Midland Railway had revealed coal seams, and Stephenson put money into their exploitation. George first courted Elizabeth (Betty) Hindmarsh, a farmer's daughter from Black Callerton, whom he met secretly in her orchard. Her father refused marriage because of Stephenson's lowly status as a miner. George next paid attention to Anne Henderson, with whose family he lodged, but she rejected him and he transferred his attentions to her sister Frances (Fanny), who was nine years his senior. George and Fanny married at Newburn Church on 28 November 1802. They had two children, Robert (1803) and Fanny (1805), but the latter died within months. George's wife died, probably of tuberculosis, the year after. While George was working in Scotland, Robert was brought up by a succession of neighbours and then by George's unmarried sister Eleanor (Nelly), who lived with them in Killingworth on George's return. On 29 March 1820, George (now considerably wealthier) married Betty Hindmarsh at Newburn. The marriage seems to have been happy, but there were no children and Betty died on 3 August 1845. On 11 January 1848, at St John's Church in Shrewsbury, Shropshire, George married for the third time, to Ellen Gregory, another farmer's daughter, originally from Bakewell in Derbyshire, who had been his housekeeper. Seven months after his wedding, George contracted pleurisy and died, aged 67, at noon on 12 August 1848 at Tapton House in Chesterfield, Derbyshire. He was buried at Holy Trinity Church, Chesterfield, alongside his second wife. Described by Rolt as a generous man, Stephenson financially supported the wives and families of several men who had died in his employment through accident or misadventure, some within his family and some not. He was also a keen gardener throughout his life; during his last years at Tapton House, he built hothouses in the estate gardens, growing exotic fruits and vegetables in a 'not too friendly' rivalry with Joseph Paxton, head gardener at nearby Chatsworth House, twice beating the master of the craft. George Stephenson had two children. His son Robert was born on 16 October 1803. Robert married Frances Sanderson, daughter of the City of London professional John Sanderson, on 17 June 1829. Robert died in 1859, having had no children. Robert Stephenson expanded on the work of his father and became a major railway engineer himself. Abroad, Robert was involved in the Alexandria–Cairo railway that later connected with the Suez Canal. George Stephenson's daughter was born in 1805 but died within weeks of her birth.
Descendants of the wider Stephenson family continue to live in Wylam (Stephenson's birthplace) today. Relatives connected to him by marriage also live in Derbyshire. Some descendants later emigrated to Perth, Australia, with later generations remaining there to this day. Britain led the world in the development of railways, which acted as a stimulus for the Industrial Revolution by facilitating the transport of raw materials and manufactured goods. George Stephenson, with his work on the Stockton and Darlington Railway and the Liverpool and Manchester Railway, paved the way for the railway engineers who followed, such as his son Robert, his assistant Joseph Locke, who carried out much work on his own account, and Isambard Kingdom Brunel. Stephenson was farsighted in realising that the individual lines being built would eventually be joined together, and would need to have the same gauge. The standard gauge used throughout much of the world is due to him. In 2002, Stephenson was named in the BBC's television show and list of the "100 Greatest Britons" following a UK-wide vote, placing at no. 65. The Victorian self-help advocate Samuel Smiles had published his first biography of George Stephenson in 1857, and although it was attacked as biased in favour of George at the expense of his rivals, as well as of his son, it was popular and 250,000 copies were sold by 1904. The Band of Hope were selling biographies of George in 1859 at a penny a sheet, and at one point there was a suggestion to move George's body to Westminster Abbey. The centenary of George's birth was celebrated in 1881 at Crystal Palace by 15,000 people, and it was George who was featured on the reverse of the Series E five pound note issued by the Bank of England between 1990 and 2003. The Stephenson Railway Museum in North Shields is named after George and Robert Stephenson. George Stephenson's Birthplace is an 18th-century historic house museum in the village of Wylam, and is operated by the National Trust. Dial Cottage at West Moor, his home from 1804, remains, but the museum that once operated there is now closed. Chesterfield Museum in Chesterfield, Derbyshire, has a gallery of Stephenson memorabilia, including the straight thick glass tubes he invented for growing straight cucumbers. The museum is in the Stephenson Memorial Hall, not far from both Stephenson's final home at Tapton House and Holy Trinity Church, within which is his vault. In Liverpool, where he lived at 34 Upper Parliament Street, a City of Liverpool Heritage Plaque is situated next to the front door. George Stephenson College, founded in 2001 on the University of Durham's Queen's Campus in Stockton-on-Tees, is named after him. Also named after him and his son are George Stephenson High School in Killingworth, Stephenson Memorial Primary School in Howdon, the Stephenson Railway Museum in North Shields and the Stephenson Locomotive Society. The Stephenson Centre, an SEBD Unit of Beaumont Hill School in Darlington, is named after him. His last home in Tapton, Chesterfield, is now part of Chesterfield College and is called Tapton House Campus. As a tribute to his life and works, a bronze statue of Stephenson was unveiled at Chesterfield railway station (in the town where Stephenson spent the last ten years of his life) on 28 October 2005, marking the completion of improvements to the station. At the event a full-size working replica of the "Rocket" was on show, which then spent two days on public display at the Chesterfield Market Festival.
A statue of him dressed in classical robes stands in Neville Street, Newcastle, facing the buildings that house the Literary and Philosophical Society of Newcastle upon Tyne and the North of England Institute of Mining and Mechanical Engineers, near Newcastle railway station. The statue was sculpted in 1862 by John Graham Lough and is listed Grade II. From 1990 until 2003, Stephenson's portrait appeared on the reverse of Series E £5 notes issued by the Bank of England. Stephenson's face is shown alongside an engraving of the "Rocket" steam engine and the Skerne Bridge on the Stockton to Darlington Railway. In popular media, Stephenson was portrayed by actor Gawn Grainger on television in the 1985 "Doctor Who" serial "The Mark of the Rani".
https://en.wikipedia.org/wiki?curid=12578
Glass Glass is a non-crystalline, often transparent amorphous solid that has widespread practical, technological, and decorative use in, for example, window panes, tableware, and optics. Glass is most often formed by rapid cooling (quenching) of the molten form; some glasses, such as volcanic glass, are naturally occurring. The most familiar, and historically the oldest, types of manufactured glass are "silicate glasses" based on the chemical compound silica (silicon dioxide, or quartz), the primary constituent of sand. Soda-lime glass, containing around 70% silica, accounts for around 90% of manufactured glass. The term "glass", in popular usage, is often used to refer only to this type of material, although silica-free glasses often have desirable properties for applications in modern communications technology. Some objects, such as drinking glasses and eyeglasses, are so commonly made of silicate-based glass that they are simply called by the name of the material. Although brittle, silicate glass is extremely durable, and many examples of glass fragments exist from early glass-making cultures. Archaeological evidence suggests glass-making dates back to at least 3,600 BCE in Mesopotamia, Egypt, or Syria. The earliest known glass objects were beads, perhaps created accidentally during metal-working or the production of faience. Because it can readily be formed into any shape, glass has been traditionally used for vessels: bowls, vases, bottles, jars and drinking glasses. In its most solid forms, it has also been used for paperweights and marbles. Glass can be coloured by adding metal salts or painted and printed with vitreous enamels, leading to its use in stained glass windows and other glass art objects. The refractive, reflective and transmission properties of glass make it suitable for manufacturing optical lenses, prisms, and optoelectronics materials. Extruded glass fibres have application as optical fibres in communications networks, as thermal insulating material when matted as glass wool so as to trap air, or in glass-fibre reinforced plastic (fibreglass). The standard definition of a "glass" (or vitreous solid) is a solid formed by rapid melt quenching. However, the term "glass" is often defined in a broader sense, to describe any non-crystalline (amorphous) solid that exhibits a glass transition when heated towards the liquid state. Glass is an amorphous solid. Although the atomic-scale structure of glass shares characteristics of the structure of a supercooled liquid, glass exhibits all the mechanical properties of a solid. As in other amorphous solids, the atomic structure of a glass lacks the long-range periodicity observed in crystalline solids. Due to chemical bonding constraints, glasses do possess a high degree of short-range order with respect to local atomic polyhedra. The notion that glass flows to an appreciable extent over extended periods of time is not supported by empirical research or theoretical analysis (see viscosity in solids). Laboratory measurements of room-temperature glass flow show a motion consistent with a material viscosity on the order of 10^17–10^18 Pa s. For melt quenching, if the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory.
Generally, a glass exists in a structurally metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase. Glass is sometimes considered to be a liquid due to its lack of a first-order phase transition, in which certain thermodynamic variables such as volume, entropy and enthalpy would be discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition, where the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. Nonetheless, the equilibrium theory of phase transformations does not entirely hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids. Glass can form naturally from volcanic magma. Obsidian is a common volcanic glass with high silica (SiO2) content formed when felsic lava extruded from a volcano cools rapidly. Impactite is a form of glass formed by the impact of a meteorite; Moldavite (found in central and eastern Europe) and Libyan desert glass (found in areas in the eastern Sahara, the deserts of eastern Libya and western Egypt) are notable examples. Vitrification of quartz can also occur when lightning strikes sand, forming hollow, branching rootlike structures called fulgurites. Trinitite is a glassy residue formed from the desert floor sand at the Trinity nuclear bomb test site. Edeowie glass, found in South Australia, is proposed to originate from Pleistocene grassland fires, lightning strikes, or hypervelocity impact by one or several asteroids or comets. Naturally occurring obsidian glass was used by Stone Age societies as it fractures along very sharp edges, making it ideal for cutting tools and weapons. Glassmaking dates back at least 6,000 years, long before humans had discovered how to smelt iron. Archaeological evidence suggests that the first true synthetic glass was made in Lebanon and coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid third millennium BCE, were beads, perhaps initially created as accidental by-products of metal-working (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing. Early glass was rarely transparent and often contained impurities and imperfections. Some of this early glass has been identified as faience, and true glass did not appear in the same area until the 15th century BC. Red-orange glass beads excavated from the Indus Valley Civilization, dated before 1700 BC and possibly as early as 1900 BC, predate sustained glass production, which appeared around 1600 BC in Mesopotamia and 1500 BC in Egypt. During the Late Bronze Age there was a rapid growth in glassmaking technology in Egypt and Western Asia. Archaeological finds from this period include coloured glass ingots, vessels, and beads. Much early glass production relied on grinding techniques borrowed from stone working, meaning that glass was ground and carved in a cold state. The term "glass" developed in the late Roman Empire. It was in the Roman glassmaking centre at Trier, now in modern Germany, that the late-Latin term "glesum" originated, probably from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman Empire in domestic, funerary, and industrial contexts.
Examples of Roman glass have been found outside of the former Roman Empire in China, the Baltics, the Middle East and India. The Romans perfected cameo glass, produced by etching and carving through fused layers of different colours to produce a design in relief on the glass object. Glass was used extensively during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. From the 10th century onwards, glass was employed in stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint Denis. By the 14th century, architects were designing buildings with walls of stained glass, such as Sainte-Chapelle, Paris (1203–1248), and the East end of Gloucester Cathedral. With the Renaissance, and a change in architectural style, the use of large stained glass windows became much less prevalent, although stained glass had a major revival with Gothic Revival architecture in the 19th century. During the 13th century, the island of Murano, Venice, became a centre for glass making, building on medieval techniques to produce colourful ornamental pieces in large quantities. Murano glass makers developed the exceptionally clear colourless glass cristallo, so called for its resemblance to natural crystal, which was extensively used for windows, mirrors, ships' lanterns, and lenses. In the 13th, 14th, and 15th centuries, enamelling and gilding on glass vessels were perfected in Egypt and Syria. Towards the end of the 17th century, Bohemia became an important region for glass production, remaining so until the start of the 20th century. By the 17th century, glass was also being produced in England in the Venetian tradition. In around 1675, George Ravenscroft invented lead crystal glass, with cut glass becoming fashionable in the 18th century. Ornamental glass objects became an important art medium during the Art Nouveau period in the late 19th century. Throughout the 20th century, new mass-production techniques led to the widespread availability of bulk glass, its increased use as a building material, and new applications of glass. In the 1920s a mould-etch process was developed, in which art was etched directly into the mould, so that each cast piece emerged from the mould with the image already on the surface of the glass. This reduced manufacturing costs and, combined with a wider use of coloured glass, led to cheap glassware in the 1930s, which later became known as Depression glass. In the 1950s, Pilkington Bros., England, developed the float glass process, producing high-quality, distortion-free flat sheets of glass by floating molten glass on molten tin. Modern multi-story buildings are frequently constructed with curtain walls made almost entirely of glass. Similarly, laminated glass has been widely applied to vehicles for windscreens. Optical glass for spectacles has been used since the Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other applications in medicine and science. Glass is also employed as the aperture cover in many solar energy collectors. In the 21st century, glass manufacturers have developed different brands of chemically strengthened glass for widespread application in touchscreens for smartphones, tablet computers, and many other types of information appliances. These include Gorilla Glass, developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation.
Glass is in widespread use in optical systems due to its ability to refract, reflect, and transmit light following geometrical optics. The most common and oldest applications of glass in optics are as lenses, windows, mirrors, and prisms. The key optical properties of glass, its refractive index, dispersion, and transmission, are strongly dependent on chemical composition and, to a lesser degree, its thermal history. Optical glass typically has a refractive index of 1.4 to 2.4 and an Abbe number, which characterises dispersion, of 15 to 100. Refractive index may be modified by high-density (refractive index increases) or low-density (refractive index decreases) additives. Glass transparency results from the absence of grain boundaries which diffusely scatter light in polycrystalline materials. Semi-opacity due to crystallization may be induced in many glasses by maintaining them for a long period at a temperature just insufficient to cause fusion. In this way, the crystalline, devitrified material known as Réaumur's glass porcelain is produced. Although generally transparent to visible light, glasses may be opaque to other wavelengths of light. While silicate glasses are generally opaque to infrared wavelengths, with a transmission cut-off at 4 μm, heavy-metal fluoride and chalcogenide glasses are transparent further into the infrared, to wavelengths of roughly 7 and 18 μm respectively. The addition of metallic oxides results in differently coloured glasses, as the metallic ions will absorb wavelengths of light corresponding to specific colours. In the manufacturing process, glasses can be poured, formed, extruded and moulded into forms ranging from flat sheets to highly intricate shapes. The finished product is brittle and will fracture unless laminated or tempered to enhance durability. Glass is typically inert, resistant to chemical attack, and can mostly withstand the action of water, making it an ideal material for the manufacture of containers for foodstuffs and most chemicals. Nevertheless, although usually highly resistant to chemical attack, glass will corrode or dissolve under some conditions. The materials that make up a particular glass composition have an effect on how quickly the glass corrodes. Glasses containing a high proportion of alkali or alkaline earth elements are more susceptible to corrosion than other glass compositions. The density of glass varies with chemical composition, with values ranging from for fused silica to for dense flint glass. Glass is stronger than most metals, with a theoretical tensile strength estimated at to due to its ability to undergo reversible compression without fracture. However, the presence of scratches, bubbles, and other microscopic flaws leads to a typical range of to in most commercial glasses. Several processes such as toughening can increase the strength of glass. Carefully drawn, flawless glass fibres can be produced with strength of up to . The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. Instead, glass manufacturing processes in the past produced sheets of non-uniform thickness, leading to the observed sagging and ripples in old windows. Silicon dioxide (SiO2) is a common fundamental constituent of glass.
Fused quartz is a glass made from chemically-pure silica. It has very low thermal expansion and excellent resistance to thermal shock, being able to survive immersion in water while red hot, resists high temperatures (1000–1500 °C) and chemical weathering, and is very hard. It is also transparent to a wider spectral range than ordinary glass, extending from the visible further into both the UV and IR ranges, and is sometimes used where transparency to these wavelengths is necessary. Fused quartz is used for high-temperature applications such as furnace tubes, lighting tubes, melting crucibles, etc. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Therefore, normally, other substances (fluxes) are added to lower the melting temperature and simplify glass processing. Sodium carbonate (Na2CO3, "soda") is a common additive and acts to lower the glass-transition temperature. However, sodium silicate is water-soluble, so lime (CaO, calcium oxide, generally obtained from limestone), some magnesium oxide (MgO) and aluminium oxide (Al2O3) are other common components added to improve chemical durability. Soda-lime glasses (silica plus soda (Na2O), lime (CaO), magnesia (MgO) and alumina (Al2O3)) account for over 75% of manufactured glass, containing about 70 to 74% silica by weight. Soda-lime-silicate glass is transparent, easily formed, and most suitable for window glass and tableware. However, it has a high thermal expansion and poor resistance to heat. Soda-lime glass is typically used for windows, bottles, light bulbs, and jars. Borosilicate glasses (e.g. Pyrex, Duran) typically contain 5–13% boron trioxide (B2O3). Borosilicate glasses have fairly low coefficients of thermal expansion (the CTE of 7740 Pyrex is about 3.25 × 10^-6/°C, compared with about 9 × 10^-6/°C for a typical soda-lime glass). They are, therefore, less subject to stress caused by thermal expansion and thus less vulnerable to cracking from thermal shock. They are commonly used for e.g. labware, household cookware, and sealed-beam car headlamps. The addition of lead(II) oxide into silicate glass lowers the melting point and viscosity of the melt. The high density of lead glass (silica + lead oxide (PbO) + potassium oxide (K2O) + soda (Na2O) + zinc oxide (ZnO) + alumina) results in a high electron density, and hence a high refractive index, making the look of glassware more brilliant and causing noticeably more specular reflection and increased optical dispersion. Lead glass has a high elasticity, making the glassware more workable and giving rise to a clear "ring" sound when struck. However, lead glass cannot withstand high temperatures well. Lead oxide also facilitates the solubility of other metal oxides and is used in coloured glass. The viscosity decrease of lead glass melt is very significant (roughly 100 times in comparison with soda glass); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The large ionic radius of the Pb2+ ion renders it highly immobile and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda-lime glass (about 10^8.5 vs 10^6.5 Ω⋅cm, DC at 250 °C). Aluminosilicate glass typically contains 5–10% alumina (Al2O3). Aluminosilicate glass tends to be more difficult to melt and shape compared to borosilicate compositions, but has excellent thermal resistance and durability.
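The practical consequence of the expansion coefficients quoted above can be estimated with the approximate thermal-stress relation σ ≈ E·α·ΔT for a constrained surface. A minimal Python sketch, in which the Young's modulus and the size of the temperature step are assumptions for illustration only, not figures from the text:

# Thermal stress in a suddenly chilled, constrained glass surface: sigma ~ E * alpha * dT.
# The Young's modulus and the temperature step are illustrative assumptions.
E_MODULUS = 70e9   # Pa, assumed to be similar for both glasses
DELTA_T = 150      # K, assumed sudden temperature change

CTE = {
    "borosilicate (Pyrex)": 3.25e-6,  # 1/K, from the text
    "typical soda-lime": 9e-6,        # 1/K, from the text
}

for name, alpha in CTE.items():
    stress_mpa = E_MODULUS * alpha * DELTA_T / 1e6
    print(f"{name}: ~{stress_mpa:.0f} MPa")
# Under these assumptions soda-lime glass develops roughly three times the stress of
# borosilicate for the same thermal shock, which is why borosilicate ware tolerates
# sudden heating and cooling better.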
Aluminosilicate glass is extensively used for fiberglass, used for making glass-reinforced plastics (boats, fishing rods, etc.), top-of-stove cookware, and halogen bulb glass. The addition of barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses. Iron can be incorporated into glass to absorb infrared radiation, for example in heat-absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs ultraviolet wavelengths. Fluorine lowers the dielectric constant of glass. Fluorine is highly electronegative and lowers the polarizability of the material. Fluoride silicate glasses are used in manufacture of integrated circuits as an insulator. Glass-ceramic materials contain both non-crystalline glass and crystalline ceramic phases. They are formed by controlled nucleation and partial crystallisation of a base glass by heat treatment. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. Glass-ceramics exhibit advantageous thermal, chemical, biological, and dielectric properties as compared to metals or organic polymers. The most commercially important property of glass-ceramics is their imperviousness to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking and industrial processes. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C. Fibreglass (also called glass fibre reinforced plastic, GRP) is a composite material made by reinforcing a plastic resin with glass fibres. It is made by melting glass and stretching the glass into fibres. These fibres are woven together into a cloth and left to set in a plastic resin. Fibreglass has the properties of being lightweight and corrosion resistant, and is a good insulator enabling its use as building insulation material and for electronic housing for consumer products. Fibreglass was originally used in the United Kingdom and United States during World War II to manufacture radomes. Uses of fibreglass include building and construction materials, boat hulls, car body parts, and aerospace composite materials. Glass-fibre wool is an excellent thermal and sound insulation material, commonly used in buildings (e.g. attic and cavity wall insulation), and plumbing (e.g. pipe insulation), and soundproofing. It is produced by forcing molten glass through a fine mesh by centripetal force, and breaking the extruded glass fibres into short lengths using a stream of high-velocity air. The fibres are bonded with an adhesive spray and the resulting wool mat is cut and packed in rolls or panels. Besides common silica-based glasses many other inorganic and organic materials may also form glasses, including metals, aluminates, phosphates, borates, chalcogenides, fluorides, germanates (glasses based on GeO2), tellurites (glasses based on TeO2), antimonates (glasses based on Sb2O3), arsenates (glasses based on As2O3), titanates (glasses based on TiO2), tantalates (glasses based on Ta2O5), nitrates, carbonates, plastics, acrylic, and many other substances. 
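The near-zero net expansion of glass-ceramics described above follows from a simple rule-of-mixtures argument: if the crystalline phase contracts on heating while the residual glass expands, there is a crystalline fraction at which the weighted average vanishes. A minimal Python sketch with assumed, purely illustrative coefficients (not values from the text):

# Rule-of-mixtures estimate of the crystalline fraction giving a net CTE of zero.
# Both coefficients are illustrative assumptions, chosen so the answer lands near
# the ~70% crystallinity mentioned above.
alpha_crystal = -3.0e-6   # 1/K, negative-CTE crystalline phase (assumed)
alpha_glass = 7.0e-6      # 1/K, positive-CTE residual glassy phase (assumed)

# Solve f * alpha_crystal + (1 - f) * alpha_glass = 0 for the crystalline fraction f.
f_crystal = alpha_glass / (alpha_glass - alpha_crystal)
print(f"crystalline fraction for zero net CTE: {f_crystal:.0%}")  # 70%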
Some of these glasses (e.g. Germanium dioxide (GeO2, Germania), in many respects a structural analogue of silica, fluoride, aluminate, phosphate, borate, and chalcogenide glasses) have physico-chemical properties useful for their application in fibre-optic waveguides in communication networks and other specialized technological applications. Silica-free glasses may often have poor glass-forming tendencies. Novel techniques, including containerless processing by aerodynamic levitation (cooling the melt whilst it floats on a gas stream) or splat quenching (pressing the melt between two metal anvils or rollers), may be used to increase the cooling rate or to reduce crystal nucleation triggers. In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk. More recently a number of alloys have been produced in layers with thickness exceeding 1 millimetre. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sell a number of zirconium-based BMGs. Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys. Experimental evidence indicates that the system Al-Fe-Si may undergo a "first-order transition" to an amorphous form (dubbed "q-glass") on rapid cooling from the melt. Transmission electron microscopy (TEM) images indicate that q-glass nucleates from the melt as discrete particles with uniform spherical growth in all directions. While x-ray diffraction reveals the isotropic nature of q-glass, a nucleation barrier exists, implying an interfacial discontinuity (or internal surface) between the glass and melt phases. Important polymer glasses include amorphous and glassy pharmaceutical compounds. These are useful because the solubility of the compound is greatly increased when it is amorphous compared to the same crystalline composition. Many emerging pharmaceuticals are practically insoluble in their crystalline forms. Many polymer thermoplastics familiar from everyday use are glasses. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative to traditional glass. Molecular liquids, electrolytes, molten salts, and aqueous solutions are mixtures of different molecules or ions that do not form a covalent network but interact only through weak van der Waals forces or through transient hydrogen bonds. In a mixture of three or more ionic species of dissimilar size and shape, crystallization can be so difficult that the liquid can easily be supercooled into a glass. Examples include LiCl:"R"H2O (a solution of lithium chloride salt and water molecules), sugar glass, and Ca0.4K0.6(NO3)1.4. Glass electrolytes in the form of Ba-doped Li-glass and Ba-doped Na-glass have been proposed as solutions to problems identified with organic liquid electrolytes used in modern lithium-ion battery cells. Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda-lime glass for mass production is melted in gas-fired units. Smaller-scale furnaces for specialty glasses include electric melters, pot furnaces, and day tanks.
After melting, homogenization and refining (removal of bubbles), the glass is formed. Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance. Once the desired form is obtained, glass is usually annealed for the removal of stresses and to increase its durability. Surface treatments, coatings or lamination may follow to improve the chemical durability (glass container coatings, glass container internal treatment), strength (toughened glass, bulletproof glass, windshields), or optical properties (insulated glazing, anti-reflective coating). New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating selenium dioxide (SeO2). Also, more readily reacting raw materials may be preferred over relatively inert ones, such as aluminum hydroxide (Al(OH)3) over alumina (Al2O3). Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), by stirring the melt, and by crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing. Colour in glass may be obtained by the addition of homogeneously distributed electrically charged ions (or colour centres). While ordinary soda-lime glass appears colourless in thin section, iron(II) oxide (FeO) impurities produce a green tint in thick sections. Manganese dioxide (MnO2), which gives glass a purple colour, may be added to remove the green tint given by FeO. FeO and chromium(III) oxide (Cr2O3) additives are used in the production of green bottles. Iron(III) oxide, on the other hand, produces yellow or yellow-brown glass. Low concentrations (0.025 to 0.1%) of cobalt oxide (CoO) produce rich, deep blue cobalt glass. Chromium is a very powerful colourising agent, yielding dark green. Sulphur combined with carbon and iron salts produces amber glass ranging from yellowish to almost black. A glass melt can also acquire an amber colour from a reducing combustion atmosphere. Cadmium sulfide produces imperial red, and combined with selenium can produce shades of yellow, orange, and red.
The additive copper(II) oxide (CuO) produces a turquoise colour in glass, in contrast to copper(I) oxide (Cu2O), which gives a dull brown-red colour. Soda-lime sheet glass is typically used as a transparent glazing material, typically as windows in the external walls of buildings. Float or rolled sheet glass products are cut to size either by scoring and snapping the material, laser cutting, water jets, or a diamond-bladed saw. The glass may be thermally or chemically tempered (strengthened) for safety and bent or curved during heating. Surface coatings may be added for specific functions such as scratch resistance, blocking specific wavelengths of light (e.g. infrared or ultraviolet), dirt-repellence (e.g. self-cleaning glass), or switchable electrochromic coatings. Structural glazing systems represent one of the most significant architectural innovations of modern times, and glass buildings now often dominate the skylines of many modern cities. These systems use stainless steel fittings countersunk into recesses in the corners of the glass panels, allowing strengthened panes to appear unsupported and creating a flush exterior. Structural glazing systems have their roots in the iron and glass conservatories of the nineteenth century. Glass is an essential component of tableware and is typically used for water, beer and wine drinking glasses. Wine glasses are typically stemware, i.e. goblets formed from a bowl, stem, and foot. Crystal or lead crystal glass may be cut and polished to produce decorative drinking glasses with gleaming facets. Other uses of glass in tableware include decanters, jugs, plates, and bowls. Glass is an important material in scientific laboratories for the manufacture of experimental apparatus because it is relatively cheap, readily formed into required shapes for experiment, easy to keep clean, can withstand heat and cold treatment, is generally non-reactive with many reagents, and its transparency allows for the observation of chemical reactions and processes. Laboratory glassware applications include flasks, petri dishes, test tubes, pipettes, graduated cylinders, glass-lined metallic containers for chemical processing, fractionation columns, glass pipes, Schlenk lines, gauges, and thermometers. Although most standard laboratory glassware has been mass-produced since the 1920s, scientists still employ skilled glassblowers to manufacture bespoke glass apparatus for their experimental requirements. Glass is a ubiquitous material in optics by virtue of its ability to refract, reflect, and transmit light. These and other optical properties can be controlled by varying chemical compositions, thermal treatment, and manufacturing techniques. The many applications of glass in optics include glasses for eyesight correction, imaging optics (e.g. lenses and mirrors in telescopes, microscopes, and cameras), fibre optics in telecommunications technology, and integrated optics. Microlenses and gradient-index optics (where the refractive index is non-uniform) find application in e.g. reading optical discs, laser printers, photocopiers, and laser diodes. The 19th century saw a revival of ancient glass-making techniques, including cameo glass, achieved for the first time since the Roman Empire, initially mostly for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy in the first French wave of the movement, producing coloured vases and similar pieces, often in cameo glass or in luster techniques.
Louis Comfort Tiffany in America specialized in stained glass, both secular and religious, in panels and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. Small studios may hand-produce glass artworks. Techniques for producing glass art include blowing, kiln-casting, fusing, slumping, pâte de verre, flame-working, hot-sculpting and cold-working. Cold work includes traditional stained glass work and other methods of shaping glass at room temperature. Objects made out of glass include vessels, paperweights, marbles, beads, sculptures and installation art.
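As a further aside on the laboratory melting practice described earlier in this article, raw-material batches are normally calculated back from the target oxide composition, because carbonates and hydroxides lose mass (for example CO2 or water) on melting. The sketch below is a simplified, hypothetical batch calculation; the target composition and the restriction to three raw materials are assumptions chosen for illustration, not a production recipe.

```python
# Simplified glass batch calculation: mass of raw material needed to deliver a
# target oxide mass, accounting for loss on ignition (e.g. CO2 from carbonates).
# Molar masses in g/mol; each raw material yields one mole of oxide per formula unit.

MOLAR_MASS = {"Na2O": 61.98, "Na2CO3": 105.99, "CaO": 56.08, "CaCO3": 100.09}

def raw_material_mass(oxide_mass_g, oxide, raw_material):
    """Mass of raw material that decomposes to the requested mass of oxide."""
    moles_oxide = oxide_mass_g / MOLAR_MASS[oxide]
    return moles_oxide * MOLAR_MASS[raw_material]

# Assumed target for a 100 g melt: roughly 74% SiO2, 16% Na2O, 10% CaO by weight.
batch = {
    "sand (SiO2)": 74.0,  # SiO2 is charged directly, no loss on ignition
    "soda ash (Na2CO3)": raw_material_mass(16.0, "Na2O", "Na2CO3"),
    "limestone (CaCO3)": raw_material_mass(10.0, "CaO", "CaCO3"),
}
for name, grams in batch.items():
    print(f"{name}: {grams:.1f} g")
```

The same book-keeping can be extended to volatile components, folding in the expected evaporation losses mentioned earlier (for instance when choosing sodium selenite over selenium dioxide).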
https://en.wikipedia.org/wiki?curid=12581
Gel electrophoresis Gel electrophoresis is a method for separation and analysis of macromolecules (DNA, RNA and proteins) and their fragments, based on their size and charge. It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments or to separate proteins by charge. Nucleic acid molecules are separated by applying an electric current to move the negatively charged molecules through a matrix of agarose or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. Proteins are separated by the charge in agarose because the pores of the gel are too small to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles. Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis, the movement of a charged particle in an electrical current. Gels suppress the thermal convection caused by the application of the electric current, and can also act as a sieving medium, retarding the passage of molecules; gels can also simply serve to maintain the finished separation so that a post electrophoresis stain can be applied. DNA Gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or Southern blotting for further characterization. Electrophoresis is a process which enables the sorting of molecules based on size. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agarose or polyacrylamide. The electric field consists of a negative charge at one end which pushes the molecules through the gel, and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric current is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel. The term "gel" in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrate without cross-links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes. 
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not all uniform the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode which is negatively charged (because this is an electrolytic rather than galvanic cell), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes. If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule. There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect the electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons. The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). 
Agarose gels, on the other hand, have lower resolving power for DNA but have a greater range of separation, and are therefore used for DNA fragments of usually 50-20,000 bp in size, although resolution of fragments over 6 Mb is possible with pulsed-field gel electrophoresis (PFGE). Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction. Agarose gels are made from the natural polysaccharide polymers extracted from seaweed. Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator. Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pairs to several megabases (millions of bases), the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead, high-percentage agarose gels should be run with pulsed-field electrophoresis (PFE) or field-inversion electrophoresis. "Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications." Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder used in creating a gel. Care must be taken when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms. Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. PAGE is currently most often used in the fields of immunology and protein analysis, often to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot. Typically, resolving gels are made at 6%, 8%, 10%, 12% or 15% acrylamide. A stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein that one wishes to identify or probe in the sample. The smaller the known weight, the higher the percentage that should be used.
Changes to the buffer system of the gel can help to further resolve proteins of very small sizes. Partially hydrolysed potato starch makes for another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Naphthol Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%. Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. As a result, the secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed. Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate, usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure, a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol. For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis. Denaturing conditions are necessary for proper estimation of the molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA, which may result in changes to its electrophoretic mobility. Urea, DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, and it may still be the method of choice for some samples. Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE). Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell. Complexes remain, for the most part, associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. Addressing and solving this problem is a major aim of quantitative native PAGE. Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids) therefore differ not only in molecular mass and intrinsic charge, but also in cross-sectional area, and thus experience different electrophoretic forces dependent on the shape of the overall structure. For proteins, since they remain in the native state, they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining. A specific example of an application of native gel electrophoresis is to check for enzymatic activity to verify the presence of the enzyme in the sample during protein purification.
For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is commercially sold as a kit for staining gels. If the protein is present, the mechanism of the reaction takes place in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4-chloro-2-methylbenzenediazonium (Fast Red TR diazonium salt) displaces the alcohol group, forming the final product, a red azo dye. As its name implies, this is the final visible red product of the reaction. In undergraduate protein-purification experiments, the gel is usually run next to commercial purified samples in order to visualize the results and conclude whether or not the purification was successful. Native gel electrophoresis is typically used in proteomics and metallomics. However, native PAGE is also used to scan genes (DNA) for unknown mutations, as in single-strand conformation polymorphism analysis. Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value. These buffers have plenty of ions in them, which is necessary for the passage of electricity through them. Something like distilled water or benzene contains few ions, which is not ideal for use in electrophoresis. There are a number of buffers used for electrophoresis. The most common for nucleic acids are Tris/acetate/EDTA (TAE) and Tris/borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate (LB), which is rarely used based on PubMed citations, isoelectric histidine, pK-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and matched ion mobilities, which lead to longer buffer life. Borate is problematic; it can polymerize or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. A size difference as small as one base pair can be resolved in a 3% agarose gel with an extremely low-conductivity medium (1 mM lithium borate). Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band in a process called isotachophoresis. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins. After the electrophoresis is complete, the molecules in the gel can be stained to make them visible.
DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie Brilliant Blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity, for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a Gel Doc system. After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE. The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion. This can provide a great deal of information about the identities of the proteins in a complex. Gel electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standards or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software. Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications. In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar-phosphate backbone. Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration. Circular DNA such as plasmids, however, may show multiple bands; the speed of migration may depend on whether the DNA is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds, such as sodium hydroxide or formamide, are used to denature the nucleic acids and cause them to behave as long rods again. Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the "Chain termination method" page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis. Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28S and 18S rRNA, the 28S band being approximately twice as intense as the 18S band. Degraded RNA has less sharply defined bands, has a smeared appearance, and its intensity ratio is less than 2:1. Proteins, unlike nucleic acids, can have varying charges and complex shapes; therefore they may not migrate into the polyacrylamide gel at similar rates, or at all, when a negative-to-positive EMF is applied to the sample. Proteins, therefore, are usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge.
Generally, the amount of SDS bound is relative to the size of the protein (usually 1.4 g of SDS per gram of protein), so that the resulting denatured proteins have an overall negative charge and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the resulting SDS-coated proteins migrate in the gel is relative only to their size and not to their charge or shape. Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), by native gel electrophoresis, by preparative gel electrophoresis (QPNC-PAGE), or by 2-D electrophoresis. Characterization through ligand interaction may be performed by electroblotting or by affinity electrophoresis in agarose or by capillary electrophoresis, for example for the estimation of binding constants and the determination of structural features like glycan content through lectin binding. A novel application of gel electrophoresis is to separate or characterize metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) with regard to the size, shape, or surface chemistry of the nanoparticles. The aim is to obtain a more homogeneous sample (e.g. a narrower particle size distribution), which can then be used in further products or processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the particle size relative to the mesh size is the key parameter, and two migration mechanisms have been identified: the unrestricted mechanism, where the particle size is much smaller than the mesh size, and the restricted mechanism, where the particle size is comparable to the mesh size. A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. However, Oliver Smithies made significant contributions. Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement.
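As a worked illustration of the sizing principle described earlier (migration distance and the logarithm of fragment size are approximately inversely related), a ladder of known sizes run alongside the samples can serve as a standard curve; a common practical approach fits log size as a linear function of distance. The following sketch is a minimal Python example; the ladder sizes and measured distances are invented numbers, and a real gel warrants its own calibration.

```python
# Estimate a DNA fragment's size from its migration distance using a
# log-linear fit to a ladder of known sizes. All numbers are made-up examples.
import math

ladder_sizes_bp = [10000, 5000, 2000, 1000, 500, 250]        # known fragment sizes
ladder_distances_mm = [12.0, 18.5, 27.0, 33.5, 40.0, 46.5]   # measured migration

# Least-squares fit: log10(size) = a * distance + b
xs, ys = ladder_distances_mm, [math.log10(s) for s in ladder_sizes_bp]
x_mean, y_mean = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

def estimate_size_bp(distance_mm):
    """Interpolate an unknown band's approximate size from its migration distance."""
    return 10 ** (a * distance_mm + b)

print(f"A band at 30.0 mm corresponds to roughly {estimate_size_bp(30.0):.0f} bp")
```

The same idea underlies the molecular weight size markers mentioned above; for proteins on SDS-PAGE the estimate carries the caveats about anomalous migration already noted.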
https://en.wikipedia.org/wiki?curid=12582
Gary Lineker Gary Winston Lineker (; born 30 November 1960) is an English former professional footballer and current sports broadcaster. He is regarded as one of the greatest English strikers. Lineker's media career began with the BBC, where he has presented the flagship football programme "Match of the Day" since the late 1990s. He has also worked for Al Jazeera Sports, Eredivisie Live, NBC Sports Network and currently hosts BT Sport's coverage of the UEFA Champions League. Lineker began his football career at Leicester City in 1978, and finished as the First Division's joint top goalscorer in 1984–85. He then moved to league champions Everton where he won both the PFA Players' Player of the Year and FWA Footballer of the Year awards in his debut season, before moving to Spanish giants Barcelona. With Barcelona, he won the 1987-1988 Copa del Rey and the 1989 European Cup Winners' Cup. He joined Tottenham Hotspur in 1989, and won his second FWA Footballer of the Year and won the FA Cup, his first and only major trophy in English football. Lineker's final club was Nagoya Grampus Eight; he retired in 1994 after two seasons at the Japanese side. Lineker made his England debut in 1984, earning 80 caps and scoring 48 goals over an eight-year international career, and is England's third highest scorer, behind Bobby Charlton and Wayne Rooney. His international goals-to-games ratio remains one of the best for the country, and he holds England's record for goals in the FIFA World Cup, with 10 scored. He was top scorer in the 1986 FIFA World Cup and received the Golden Boot, the only time an Englishman had done so until Harry Kane in the 2018 World Cup. Lineker was integral in England's semi-final feat at the 1990 World Cup, where he scored four goals. Lineker is also the only player to have been the top scorer in England with three clubs: Leicester City, Everton and Tottenham Hotspur. Notably, he never received a yellow or red card during his 16-year career. As a result, he was honoured in 1990 with the FIFA Fair Play Award. In a senior career which spanned 16 years and 567 competitive games, Lineker scored a total of 330 goals, including 282 goals at club level. After his retirement from football he was inducted into the English Football Hall of Fame. A keen supporter of Leicester City, he led a consortium that invested in his old club, saving it from bankruptcy, and was appointed honorary vice-president. Lineker was born in Leicester, the son of Margaret P. (Abbs) and Barry Lineker. His middle name came from Winston Churchill, with whom he shares his birthday. He has one brother, Wayne, who is two years his junior. Lineker grew up with his family in the city, playing football with his brother Wayne. Lineker's father was a greengrocer, as was his grandfather William and great-grandfather, George, in Leicester. His father ran Lineker's fruit and veg stall in Leicester Market and as a child and a young player he regularly helped out on the stall. Lineker first attended Caldecote Road School (Caldecote Juniors), Braunstone in Leicester (east of the Meridian Centre). He went to the City of Leicester Boys' Grammar School (now City of Leicester College) on Downing Drive in Evington, due to his preference for football rather than rugby, which was the main sport of most schools near his home. Lineker was equally talented at both football and cricket. 
From the ages of 11 to 16 he captained the Leicestershire Schools cricket team, and had felt that he had a higher chance of succeeding at it rather than football. He later stated on "They Think It's All Over" that as a teenager he idolised former England captain David Gower, who was playing for Leicestershire at the time. During his youth he played for Aylestone Park Youth, later becoming the club's president. Lineker left school with four O Levels. One of his teachers wrote on his report card that he "concentrates too much on football" and that he would "never make a living at that". He then joined the youth academy at Leicester City in 1976. Lineker began his career at his home town club Leicester City after leaving school in 1977, turning professional in the 1978–79 season and making his senior debut on New Year's Day 1979 in a 2–0 win over Oldham Athletic in the Second Division at Filbert Street. He earned a Second Division title medal a year later with 19 appearances, but played just nine league games in 1980–81 as Leicester went straight back down. However, he became a regular player in 1981–82, scoring 19 goals in all competitions that season. Although Leicester missed out on promotion, they reached the semi-finals of the FA Cup, and clinched promotion a year later as Lineker scored 26 times in the Second Division. In 1983–84, he enjoyed regular First Division action for the first time and was the division's second highest scorer with 22 goals, although Leicester failed to finish anywhere near the top of the league. He was the First Division's joint top scorer in 1984–85 with 24 goals, and was enjoying a prolific partnership with Alan Smith. However, by this stage, he was attracting the attention of bigger clubs, and a move from Filbert Street was looking certain. In the 1985 close season, defending league champions Everton signed Lineker for £800,000; he scored 40 goals in 57 games for his new team in the 1985–86 season. Lineker's first game for Everton happened to be away to Leicester City; at half time, he walked into the Leicester dressing room by mistake. He was again the First Division's leading goal scorer, this time with 30 goals (including three hat-tricks), and helped Everton finish second in the league. While at Everton, they reached the FA Cup final for the third consecutive year but lost 3–1 to Liverpool, despite Lineker giving them an early lead when he outpaced Alan Hansen to score. Liverpool had also pipped Everton to the title by just two points. "I was only on Merseyside a short time, nine or 10 months in total really, but it was still a happy time personally, while professionally it was one of the most successful periods of my career," he says. "I still have an affinity towards Everton." Lineker scored three hat-tricks for Everton; at home to Birmingham City in a 4–1 league win on 31 August 1985, at home to Manchester City in a 4–0 home win on 11 February 1986, and then in the penultimate league game of the season on 3 May 1986, when they kept their title hopes alive with a 6–1 home win over Southampton. On his final league appearance, he scored twice in a 3–1 home win over West Ham United whose own title hopes had just disappeared. However, he and his colleagues were denied title glory as Liverpool also won their final league game of the season at Chelsea. Lineker has consistently stated since retiring from football that this Everton team was the best club side he ever played in. 
After winning the Golden Boot at the 1986 World Cup in Mexico, Lineker was signed by Barcelona for £2.8 million. Barcelona were being managed by former Queens Park Rangers manager Terry Venables, who had also brought in Manchester United and Wales striker Mark Hughes. Barcelona gave Lineker his first chance of European football, as Leicester had never qualified for Europe while he played for them, and Everton were denied a place in the European Cup for 1985–86 due to the commencement of the ban on English clubs in European competitions following the Heysel disaster. Lineker made his Barcelona debut against Racing Santander, scoring twice. His Golden Boot-winning performance at the finals generated much anticipation of success at the Camp Nou, and he did not disappoint, scoring 21 goals in 41 games during his first season, including a hat-trick in a 3–2 win over archrivals Real Madrid. Barcelona went on to win the Copa del Rey in 1988 and the European Cup Winners' Cup in 1989. Lineker played in Barcelona's shock home and away defeats to Dundee United. Barcelona manager Johan Cruyff decided to play Lineker on the right of the midfield and he was eventually no longer an automatic choice in the team. With 42 goals in 103 La Liga appearances, Lineker became the highest scoring British player in the competition's history, but was later surpassed by Gareth Bale in March 2016. Manchester United manager Alex Ferguson attempted to sign Lineker to partner his ex-Barcelona teammate Mark Hughes in attack, but Lineker instead signed for Tottenham Hotspur in July 1989 for £1.1 million. Over three seasons, he scored 67 goals in 105 league games and won the FA Cup while playing for the club. He finished as top scorer in the First Division in the 1989–90 season, scoring 24 goals as Spurs finished third. He finally collected an English trophy when he won the 1991 FA Cup Final with Spurs, who beat Nottingham Forest 2–1. This was despite Lineker having a goal controversially disallowed for offside and also having a penalty saved by goalkeeper Mark Crossley. Lineker had contributed to Tottenham's run to the final. In the semi-final he scored twice in a 3–1 win over North London rivals Arsenal. He was the top division's second-highest goalscorer in 1991–92 with 28 goals from 35 games, behind Ian Wright, who scored 29 times in 42 games. Despite Lineker's personal performance, Tottenham finished this final pre-Premier League season in 15th place. His last goal in English football came on the last day of the season in a 3–1 defeat to Manchester United at Old Trafford. In November 1991, Lineker accepted an offer of a two-year contract from J1 League club Nagoya Grampus Eight. The transfer fee paid to Tottenham Hotspur was £2 million. He officially joined Nagoya Grampus Eight after playing his final game for Spurs on 2 May 1992, when he scored the consolation goal in a 3–1 defeat by Manchester United on the last day of the season. Shortly before accepting the offer from Nagoya Grampus Eight, Tottenham had rejected an offer from ambitious Second Division club Blackburn Rovers, who had recently been taken over by steel baron Jack Walker. Having scored 9 goals in 23 appearances over two injury impacted seasons for Nagoya Grampus Eight, he announced his retirement from playing in September 1994. The English national media had previously reported that he would be returning to England to complete his playing career at Middlesbrough or Southampton. 
Lineker was capped once by the England B national team, playing in a 2–0 home win over New Zealand's B team on 13 November 1984. He first played for the full England team against Scotland in 1984. He played five games in the 1986 World Cup and was top scorer of the tournament with six goals, winning the Golden Boot, making him the first English player to have done so. He scored the second quickest hat-trick ever at a FIFA World Cup tournament against Poland, the second English player to score a hat-trick at a World Cup, and scored two goals against Paraguay in the second round. He played most of the tournament wearing a lightweight cast on his forearm. He scored for England in the World Cup quarter-final against Argentina, but the game ended in defeat as Diego Maradona scored twice for the opposition (the first goal being the "Hand of God" handball, and the second being the "Goal of the Century"). In 1988, Lineker played in Euro 88, but failed to score as England lost all three Group games. It was later established that he had been suffering from hepatitis. In the 1990 World Cup, he scored four goals to help England reach the semi-finals after a string of draws and narrow victories. He was unwell during the tournament, and accidentally defecated during the opening group game against the Republic of Ireland. After Andreas Brehme sent England 1–0 down in the semi-final, Lineker received a pass from Paul Parker and escaped two West German defenders on his way to scoring the equaliser, but the West Germans triumphed in the penalty shoot-out and went on to win the trophy. Later he said: "Football is a simple game; 22 men chase a ball for 90 minutes and at the end, the Germans win." Lineker's equaliser appears in the popular England national team anthem, "Three Lions", with the lyric "When Lineker scored". He retired from international football with eighty caps and forty-eight goals, one fewer goal than Sir Bobby Charlton's England record (which Charlton accrued over 106 caps). In what proved to be his last England match, against Sweden at Euro 92, he was substituted by England coach Graham Taylor in favour of Arsenal striker Alan Smith, ultimately denying him the chance to equal—or even better—Charlton's record. He had earlier missed a penalty that would have brought him level, in a pre-tournament friendly against Brazil. He was visibly upset at the decision, not looking at Taylor as he took the bench. He scored four goals in an England match on two occasions and is one of very few players never to have been given a yellow card or a red card in any type of game. Following retirement from professional football, he developed a career in the media, initially on BBC Radio 5 Live and as a football pundit before replacing Des Lynam as the BBC's anchorman for football coverage, including their flagship football television programme "Match of the Day", and as a team captain on the sports game show "They Think It's All Over" from 1995 to 2003. Following the departure of Steve Rider from the BBC, Lineker, who is a keen recreational golfer with a handicap of four, became the new presenter for the BBC's golf coverage. Also, he presented "Grandstand" in the London studio while then-presenter Desmond Lynam was in Aintree when the Grand National was abandoned because of a bomb alert at the racecourse in 1997. 
Despite receiving some criticism from his peers, he continued to front the BBC's coverage of the Masters and The Open, where he put his language skills to good use by giving an impromptu interview in Spanish with Argentinian Andrés Romero. He also appeared in the 1991 play "An Evening with Gary Lineker" by Arthur Smith and Chris England, which was adapted for television in 1994. He presented a six-part TV series for the BBC in 1998 (directed by Lloyd Stanton) called "Golden Boots", with other football celebrities. It was an extensive history of the World Cup focusing on the 'Golden Boots' (top scorers). In 2001, Lineker appeared in the TV show "Brass Eye" (episode "Paedogeddon"). In 2002, Lineker had a cameo appearance in the film "Bend It Like Beckham". In 2005, Lineker was sued for defamation by Australian footballer Harry Kewell over comments Lineker had made writing in his column in "The Sunday Telegraph" about Kewell's transfer from Leeds United to Liverpool. However, the jury was unable to reach a verdict. It became known during the case that the article had actually been ghost-written by a journalist at the "Sunday Telegraph" following a telephone interview with Lineker. In 2006, Lineker took on an acting role as the voice of "Underground Ernie" on the BBC's children's channel, CBeebies. In December 2008, Lineker appeared on the ITV1 television programme "Who Wants to Be a Millionaire?" where he and English rugby union player Austin Healey won £50,000 for the Nicholls Spinal Injury Foundation. In 2009, Lineker and his wife Danielle hosted a series of the BBC's "Northern Exposure", following on from Laurence Llewelyn-Bowen from the previous year in visiting and showcasing locations throughout Northern Ireland. In May 2010, Lineker resigned from his role as columnist for "The Mail on Sunday" in protest over the sting operation against Lord Triesman that reportedly jeopardised England's bid to host the 2018 World Cup. Triesman resigned as chairman of the bid and the FA on 16 May 2010 after the publication of a secret recording of a conversation between the peer and a former ministerial aide, during which he claimed that Spain and Russia were planning to bribe referees at the World Cup in South Africa. Lineker then began working as an anchor for the English language football coverage for Al Jazeera Sport, which is broadcast throughout most of the Middle East. He left the Qatar-based network in 2012. In 2013, Lineker began working for NBCSN as part of their Premier League coverage, and contributing to the US version of "Match of the Day". On 9 June 2015, Lineker was unveiled as the lead presenter of BT Sport's Champions League coverage. On 13 August 2016, Lineker presented the first "Match of the Day" of the 2016–17 season wearing only boxer shorts. He had promised in a tweet from December 2015 that, if Leicester City won the Premier League, he would "present Match of the Day in just my undies". On 18 October 2016, Lineker tweeted a rebuttal to a statement made by MP David Davies where Davies suggested refugees entering the UK should undergo dental checks to verify their age. Lineker posted "The treatment by some towards these young refugees is hideously racist and utterly heartless. What's happening to our country?" This led "The Sun" to call for Lineker's sacking from "Match of the Day", claiming that he had breached BBC impartiality guidelines. Lineker described the controversy as "a spanking" but continued to advocate for refugees. 
In July 2018, Lineker announced his support for People's Vote, a campaign group calling for a public vote on the final Brexit deal between the UK and the European Union. Lineker has appeared in a number of adverts for the Leicester-based snack company Walkers. Originally signing a £200,000 deal in 1994, his first advert was 1995's "Welcome Home" (Lineker had recently returned to England having played in Japan). Walkers temporarily named their salt and vinegar crisps after Lineker, labelling them 'Salt & Lineker', in the late 1990s. In 2000, Lineker's Walkers commercials were ranked ninth in Channel 4's poll of the "100 Greatest Adverts". In May 2014, Lineker established his own production company Goalhanger Films Ltd. with former ITV Controller Tony Pastor. During the 2014 FIFA World Cup, Lineker presented several short videos produced by Goalhanger Films on YouTube with the title "Blahzil". In May 2015, the company produced a 60-minute-long documentary presented by Lineker titled "Gary Lineker on the Road to FA Cup Glory" for the BBC. Lineker married Michelle Cockayne in 1986. In May 2006, Michelle filed for divorce on the grounds of Gary's alleged "unreasonable behaviour," with documents submitted to the court claiming that Lineker's actions in their marriage had caused her "stress and anxiety." Lineker and Michelle have four sons; George, Harry, Tobias and Angus. The couple subsequently stated that the situation was amicable. In the early 1990s, George, Lineker's eldest son, survived a rare form of leukaemia whilst he was a baby, treated at Great Ormond Street Hospital in London. Lineker now supports children's cancer charity CLIC Sargent and has appeared in promotional clips encouraging people to give blood. Lineker has been actively involved with other cancer charities such as Leukaemia Busters, where between 1994 and 2005 Gary and Michelle were the charity's patrons. He has also been involved with the Fight for Life and Cancer Research UK charities. Lineker was made a freeman of the City of Leicester in 1995 and he has been referred to as "Leicester's favourite son". In October 2002, Lineker backed a £5 million bid to rescue his former club Leicester City, which had recently gone into administration, describing his involvement as "charity" rather than an "ego trip." He stated that he would invest a six-figure sum and that other members of his consortium would invest a similar amount. Lineker met with fans' groups to persuade them to try and raise money to rescue his former club. The club was eventually saved from liquidation. Lineker is now honorary Vice President of Leicester City F.C. Lineker married Danielle Bux on 2 September 2009, in Ravello, Italy. They went on to win £30,000 for charity on ITV's gameshow "Mr and Mrs". On 13 January 2016, Lineker and Bux announced they were divorcing, after six years of marriage, the reason given being Gary not wanting more children. In 1985, Lineker was best man at snooker player Willie Thorne's wedding and their close friendship was the subject of the VHS production, "Best of Friends – The Official Story of Gary Lineker & Willie Thorne". In 2013, Lineker participated in the genealogical programme "Who Do You Think You Are?" during which he discovered an ancestor who was a poacher, and another who was a legal clerk. In November 2017, Lineker was named in the Paradise Papers in connection with a tax avoidance scheme relating to property owned in Barbados and a company set up in the British Virgin Islands. 
Lineker speaks Spanish, which he learnt during his time playing for FC Barcelona, and is an advocate for the teaching of foreign languages in school. In April 2020, during the COVID-19 pandemic, Lineker announced that he was donating £140,000 to the British Red Cross towards research into the virus. Lineker is a Visiting Fellow of Lady Margaret Hall, University of Oxford, appointed 2020.
https://en.wikipedia.org/wiki?curid=12583
Golgi apparatus The Golgi apparatus, also known as the Golgi complex, Golgi body, or simply the Golgi, is an organelle found in most eukaryotic cells. Part of the endomembrane system in the cytoplasm, it packages proteins into membrane-bound vesicles inside the cell before the vesicles are sent to their destination. It resides at the intersection of the secretory, lysosomal, and endocytic pathways. It is of particular importance in processing proteins for secretion, containing a set of glycosylation enzymes that attach various sugar monomers to proteins as the proteins move through the apparatus. It was identified in 1897 by the Italian scientist Camillo Golgi and was named after him in 1898. Owing to its large size and distinctive structure, the Golgi apparatus was one of the first organelles to be discovered and observed in detail. It was discovered in 1898 by Italian physician Camillo Golgi during an investigation of the nervous system. After first observing it under his microscope, he termed the structure as "apparato reticolare interno" ("internal reticular apparatus"). Some doubted the discovery at first, arguing that the appearance of the structure was merely an optical illusion created by the observation technique used by Golgi. With the development of modern microscopes in the 20th century, the discovery was confirmed. Early references to the Golgi apparatus referred to it by various names including the "Golgi–Holmgren apparatus", "Golgi–Holmgren ducts", and "Golgi–Kopsch apparatus". The term "Golgi apparatus" was used in 1910 and first appeared in the scientific literature in 1913, while "Golgi complex" was introduced in 1956. The subcellular localization of the Golgi apparatus varies among eukaryotes. In mammals, a single Golgi apparatus is usually located near the cell nucleus, close to the centrosome. Tubular connections are responsible for linking the stacks together. Localization and tubular connections of the Golgi apparatus are dependent on microtubules. In experiments it is seen that as microtubules are depolymerized the Golgi apparatuses lose mutual connections and become individual stacks throughout the cytoplasm. In yeast, multiple Golgi apparatuses are scattered throughout the cytoplasm (as observed in "Saccharomyces cerevisiae"). In plants, Golgi stacks are not concentrated at the centrosomal region and do not form Golgi ribbons. Organization of the plant Golgi depends on actin cables and not microtubules. The common feature among Golgi is that they are adjacent to endoplasmic reticulum (ER) exit sites. In most eukaryotes, the Golgi apparatus is made up of a series of compartments and is a collection of fused, flattened membrane-enclosed disks known as cisternae (singular: "cisterna", also called "dictyosomes"), originating from vesicular clusters that bud off the endoplasmic reticulum. A mammalian cell typically contains 40 to 100 stacks of cisternae. Between four and eight cisternae are usually present in a stack; however, in some protists as many as sixty cisternae have been observed. This collection of cisternae is broken down into "cis", medial, and "trans" compartments, making up two main networks: the cis Golgi network (CGN) and the trans Golgi network (TGN). The CGN is the first cisternal structure, and the TGN is the final, from which proteins are packaged into vesicles destined to lysosomes, secretory vesicles, or the cell surface. The TGN is usually positioned adjacent to the stack, but can also be separate from it. 
The TGN may act as an early endosome in yeast and plants. There are structural and organizational differences in the Golgi apparatus among eukaryotes. In some yeasts, Golgi stacking is not observed. "Pichia pastoris" does have stacked Golgi, while "Saccharomyces cerevisiae" does not. In plants, the individual stacks of the Golgi apparatus seem to operate independently. The Golgi apparatus tends to be larger and more numerous in cells that synthesize and secrete large amounts of substances; for example, the antibody-secreting plasma B cells of the immune system have prominent Golgi complexes. In all eukaryotes, each cisternal stack has a "cis" entry face and a "trans" exit face. These faces are characterized by unique morphology and biochemistry. Within individual stacks are assortments of enzymes responsible for selectively modifying protein cargo. These modifications influence the fate of the protein. The compartmentalization of the Golgi apparatus is advantageous for separating enzymes, thereby maintaining consecutive and selective processing steps: enzymes catalyzing early modifications are gathered in the "cis" face cisternae, and enzymes catalyzing later modifications are found in "trans" face cisternae of the Golgi stacks. The Golgi apparatus is a major collection and dispatch station of protein products received from the endoplasmic reticulum (ER). Proteins synthesized in the ER are packaged into vesicles, which then fuse with the Golgi apparatus. These cargo proteins are modified and destined for secretion via exocytosis or for use in the cell. In this respect, the Golgi can be thought of as similar to a post office: it packages and labels items which it then sends to different parts of the cell or to the extracellular space. The Golgi apparatus is also involved in lipid transport and lysosome formation. The structure and function of the Golgi apparatus are intimately linked. Individual stacks have different assortments of enzymes, allowing for progressive processing of cargo proteins as they travel from the cisternae to the trans Golgi face. Enzymatic reactions within the Golgi stacks occur exclusively near its membrane surfaces, where enzymes are anchored. This feature is in contrast to the ER, which has soluble proteins and enzymes in its lumen. Much of the enzymatic processing is post-translational modification of proteins. For example, phosphorylation of oligosaccharides on lysosomal proteins occurs in the early CGN. "Cis" cisterna are associated with the removal of mannose residues. Removal of mannose residues and addition of N-acetylglucosamine occur in medial cisternae. Addition of galactose and sialic acid occurs in the "trans" cisternae. Sulfation of tyrosines and carbohydrates occurs within the TGN. Other general post-translational modifications of proteins include the addition of carbohydrates (glycosylation) and phosphates (phosphorylation). Protein modifications may form a signal sequence that determines the final destination of the protein. For example, the Golgi apparatus adds a mannose-6-phosphate label to proteins destined for lysosomes. Another important function of the Golgi apparatus is in the formation of proteoglycans. Enzymes in the Golgi append proteins to glycosaminoglycans, thus creating proteoglycans. Glycosaminoglycans are long unbranched polysaccharide molecules present in the extracellular matrix of animals. 
The vesicles that leave the rough endoplasmic reticulum are transported to the "cis" face of the Golgi apparatus, where they fuse with the Golgi membrane and empty their contents into the lumen. Once inside the lumen, the molecules are modified, then sorted for transport to their next destinations. Those proteins destined for areas of the cell other than either the endoplasmic reticulum or the Golgi apparatus are moved through the Golgi cisternae towards the "trans" face, to a complex network of membranes and associated vesicles known as the "trans-Golgi network" (TGN). This area of the Golgi is the point at which proteins are sorted and shipped to their intended destinations by their placement into one of at least three different types of vesicles, depending upon the signal sequence they carry. Though there are multiple models that attempt to explain vesicular traffic throughout the Golgi, no individual model can independently explain all observations of the Golgi apparatus. Currently, the cisternal progression/maturation model is the most accepted among scientists, accommodating many observations across eukaryotes. The other models are still important in framing questions and guiding future experimentation. Among the fundamental unanswered questions are the directionality of COPI vesicles and role of Rab GTPases in modulating protein cargo traffic. Brefeldin A (BFA) is a fungal metabolite used experimentally to disrupt the secretion pathway as a method of testing Golgi function. BFA blocks the activation of some ADP-ribosylation factors (ARFs). ARFs are small GTPases which regulate vesicular trafficking through the binding of COPs to endosomes and the Golgi. BFA inhibits the function of several guanine nucleotide exchange factors (GEFs) that mediate GTP-binding of ARFs. Treatment of cells with BFA thus disrupts the secretion pathway, promoting disassembly of the Golgi apparatus and distributing Golgi proteins to the endosomes and ER.
https://en.wikipedia.org/wiki?curid=12584
Grace Hopper Grace Brewster Murray Hopper (December 9, 1906 – January 1, 1992) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first linkers. She popularized the idea of machine-independent programming languages, which led to the development of COBOL, an early high-level programming language still in use today. Prior to joining the Navy, Hopper earned a Ph.D. in mathematics from Yale University and was a professor of mathematics at Vassar College. Hopper attempted to enlist in the Navy during World War II but was rejected because she was 34 years old. She instead joined the Navy Reserve. Hopper began her computing career in 1944 when she worked on the Harvard Mark I team led by Howard H. Aiken. In 1949, she joined the Eckert–Mauchly Computer Corporation and was part of the team that developed the UNIVAC I computer. At Eckert–Mauchly she began developing a compiler. She believed that a programming language based on English was possible. Her compiler converted English terms into machine code understood by computers. By 1952, Hopper had finished her program linker (originally called a compiler), which was written for the A-0 System. During her wartime service, she co-authored three papers based on her work on the Harvard Mark I. In 1954, Eckert–Mauchly chose Hopper to lead their department for automatic programming, and she led the release of some of the first compiled languages like FLOW-MATIC. In 1959, she participated in the CODASYL consortium, which consulted Hopper to guide them in creating a machine-independent programming language. This led to the COBOL language, which was inspired by her idea of a language being based on English words. In 1966, she retired from the Naval Reserve, but in 1967 the Navy recalled her to active duty. She retired from the Navy in 1986 and found work as a consultant for the Digital Equipment Corporation, sharing her computing experiences. The U.S. Navy guided-missile destroyer USS "Hopper" was named for her, as was the Cray XE6 "Hopper" supercomputer at NERSC. During her lifetime, Hopper was awarded 40 honorary degrees from universities across the world. A college at Yale University was renamed in her honor. In 1991, she received the National Medal of Technology. On November 22, 2016, she was posthumously awarded the Presidential Medal of Freedom by President Barack Obama. Hopper was born in New York City. She was the eldest of three children. Her parents, Walter Fletcher Murray and Mary Campbell Van Horne, were of Scottish and Dutch descent, and attended West End Collegiate Church. Her great-grandfather, Alexander Wilson Russell, an admiral in the US Navy, fought in the Battle of Mobile Bay during the Civil War. Grace was very curious as a child; this was a lifelong trait. At the age of seven, she decided to determine how an alarm clock worked and dismantled seven alarm clocks before her mother realized what she was doing (she was then limited to one clock). For her preparatory school education, she attended the Hartridge School in Plainfield, New Jersey. Hopper was initially rejected for early admission to Vassar College at age 16 (her test scores in Latin were too low), but she was admitted the following year. She graduated Phi Beta Kappa from Vassar in 1928 with a bachelor's degree in mathematics and physics and earned her master's degree at Yale University in 1930. In 1934, she earned a Ph.D.
in mathematics from Yale under the direction of Øystein Ore. Her dissertation, "New Types of Irreducibility Criteria", was published that same year. Hopper began teaching mathematics at Vassar in 1931, and was promoted to associate professor in 1941. She was married to New York University professor Vincent Foster Hopper (1906–1976) from 1930 until their divorce in 1945. She did not marry again, but chose to retain his surname. Hopper had tried to enlist in the Navy early in World War II. She was rejected for several reasons: at age 34, she was too old to enlist, and her weight-to-height ratio was too low. She was also denied on the basis that her job as a mathematician and mathematics professor at Vassar College was valuable to the war effort. During the war in 1943, Hopper obtained a leave of absence from Vassar and was sworn into the United States Navy Reserve; she was one of many women who volunteered to serve in the WAVES. She had to get an exemption to enlist, as she was below the Navy's minimum weight requirement. She reported in December and trained at the Naval Reserve Midshipmen's School at Smith College in Northampton, Massachusetts. Hopper graduated first in her class in 1944, and was assigned to the Bureau of Ships Computation Project at Harvard University as a lieutenant, junior grade. She served on the Mark I computer programming staff headed by Howard H. Aiken. Hopper and Aiken co-authored three papers on the Mark I, also known as the Automatic Sequence Controlled Calculator. Hopper's request to transfer to the regular Navy at the end of the war was declined due to her advanced age of 38. She continued to serve in the Navy Reserve. Hopper remained at the Harvard Computation Lab until 1949, turning down a full professorship at Vassar in favor of working as a research fellow under a Navy contract at Harvard. In 1949, Hopper became an employee of the Eckert–Mauchly Computer Corporation as a senior mathematician and joined the team developing the UNIVAC I. Hopper also served as UNIVAC director of Automatic Programming Development for Remington Rand. The UNIVAC was the first known large-scale electronic computer to be on the market in 1950, and was more competitive at processing information than the Mark I. When Hopper recommended the development of a new programming language that would use entirely English words, she "was told very quickly that [she] couldn't do this because computers didn't understand English." Still, she persisted. "It's much easier for most people to write an English statement than it is to use symbols," she explained. "So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code." Her idea was not accepted for three years. In the meantime, she published her first paper on the subject, compilers, in 1952. In the early 1950s, the company was taken over by the Remington Rand corporation, and it was while she was working for them that her original compiler work was done. The program was known as the A compiler and its first version was A-0. In 1952, she had an operational link-loader, which at the time was referred to as a compiler. She later said that "Nobody believed that," and that she "had a running compiler and nobody would touch it. They told me computers could only do arithmetic." She went on to say that her compiler "translated mathematical notation into machine code. Manipulating symbols was fine for mathematicians but it was no good for data processors who were not symbol manipulators.
Very few people are really symbol manipulators. If they are they become professional mathematicians, not data processors. It's much easier for most people to write an English statement than it is to use symbols. So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code. That was the beginning of COBOL, a computer language for data processors. I could say 'Subtract income tax from pay' instead of trying to write that in octal code or using all kinds of symbols. COBOL is the major language used today in data processing." In 1954 Hopper was named the company's first director of automatic programming, and her department released some of the first compiler-based programming languages, including MATH-MATIC and FLOW-MATIC. In the spring of 1959, computer experts from industry and government were brought together in a two-day conference known as the Conference on Data Systems Languages (CODASYL). Hopper served as a technical consultant to the committee, and many of her former employees served on the short-term committee that defined the new language COBOL (an acronym for COmmon Business-Oriented Language). The new language extended Hopper's FLOW-MATIC language with some ideas from the IBM equivalent, COMTRAN. Hopper's belief that programs should be written in a language that was close to English (rather than in machine code or in languages close to machine code, such as assembly languages) was captured in the new business language, and COBOL went on to be the most ubiquitous business language to date. Among the members of the committee that worked on COBOL was Mount Holyoke College alumna Jean E. Sammet. From 1967 to 1977, Hopper served as the director of the Navy Programming Languages Group in the Navy's Office of Information Systems Planning and was promoted to the rank of captain in 1973. She developed validation software for COBOL and its compiler as part of a COBOL standardization program for the entire Navy. In the 1970s, Hopper advocated for the Defense Department to replace large, centralized systems with networks of small, distributed computers. Any user on any computer node could access common databases located on the network. She developed the implementation of standards for testing computer systems and components, most significantly for early programming languages such as FORTRAN and COBOL. The Navy tests for conformance to these standards led to significant convergence among the programming language dialects of the major computer vendors. In the 1980s, these tests (and their official administration) were assumed by the National Bureau of Standards (NBS), known today as the National Institute of Standards and Technology (NIST). In accordance with Navy attrition regulations, Hopper retired from the Naval Reserve with the rank of commander at age 60 at the end of 1966. She was recalled to active duty in August 1967 for a six-month period that turned into an indefinite assignment. She again retired in 1971 but was again asked to return to active duty in 1972. She was promoted to captain in 1973 by Admiral Elmo R. Zumwalt, Jr. After Republican Representative Philip Crane saw her on a March 1983 segment of "60 Minutes", he championed a joint resolution originating in the House of Representatives, which led to her promotion on 15 December 1983 to commodore by special Presidential appointment by President Ronald Reagan.
She remained on active duty for several years beyond mandatory retirement by special approval of Congress. Effective November 8, 1985, the rank of commodore was renamed rear admiral (lower half) and Hopper became one of the Navy's few female admirals. Following a career that spanned more than 42 years, Admiral Hopper took retirement from the Navy on August 14, 1986. At a celebration held in Boston aboard USS "Constitution" to commemorate her retirement, Hopper was awarded the Defense Distinguished Service Medal, the highest non-combat decoration awarded by the Department of Defense. At the time of her retirement, she was the oldest active-duty commissioned officer in the United States Navy (79 years, eight months and five days), and had her retirement ceremony aboard the oldest commissioned ship in the United States Navy (188 years, nine months and 23 days). (Admirals William D. Leahy, Chester W. Nimitz, Hyman G. Rickover and Charles Stewart were the only other officers in the Navy's history to serve on active duty at a higher age. Leahy and Nimitz served on active duty for life due to their promotions to the rank of fleet admiral.) Following her retirement from the Navy, she was hired as a senior consultant to Digital Equipment Corporation (DEC). Hopper was initially offered a position by Rita Yavinsky, but she insisted on going through the typical formal interview process. She then proposed in jest that she would be willing to accept a position which made her available on alternating Thursdays, exhibited at their museum of computing as a pioneer, in exchange for a generous salary and an unlimited expense account. Instead, she was hired as a full-time senior consultant. In this position, Hopper represented the company at industry forums, serving on various industry committees, along with other obligations. She retained that position until her death at age 85 in 1992. At DEC Hopper served primarily as a goodwill ambassador. She lectured widely about the early days of computing, her career, and efforts that computer vendors could take to make life easier for their users. She visited most of Digital's engineering facilities, where she generally received a standing ovation at the conclusion of her remarks. Although no longer a serving officer, she always wore her Navy full dress uniform to these lectures in defiance of U.S. Department of Defense policy. "The most important thing I've accomplished, other than building the compiler," she said, "is training young people. They come to me, you know, and say, 'Do you think we can do this?' I say, 'Try it.' And I back 'em up. They need that. I keep track of them as they get older and I stir 'em up at intervals so they don't forget to take chances." On New Year's Day 1992, Hopper died in her sleep of natural causes at her home in Arlington, Virginia; she was 85 years of age. She was interred with full military honors in Arlington National Cemetery. Her legacy was an inspiring factor in the creation of the Grace Hopper Celebration of Women in Computing. Held yearly, this conference is designed to bring the research and career interests of women in computing to the forefront.
https://en.wikipedia.org/wiki?curid=12590
GNU Manifesto The GNU Manifesto is a call to action by Richard Stallman encouraging participation in and support of the GNU Project's goal of developing the GNU free computer operating system. The GNU Manifesto was published in March 1985 in "Dr. Dobb's Journal of Software Tools". It is held in high regard within the free software movement as a fundamental philosophical source. The full text is included with GNU software such as Emacs, and is publicly available. Some parts of the "GNU Manifesto" began as an announcement of the GNU Project posted by Richard Stallman on September 27, 1983, in the form of a message to Usenet newsgroups. The project's aim was to give computer users freedom and control over their computers by collaboratively developing and providing software based on Stallman's idea of software freedom (although a written definition did not exist until February 1986). The manifesto was written as a way to familiarize more people with these concepts, and to find more support in the form of work, money, programs and hardware. The "GNU Manifesto" took its name and full form in 1985 and was updated in minor ways in 1987. The "GNU Manifesto" opens with an explanation of what the GNU Project is and of its progress, at the time, in creating the GNU operating system. The system, although based on and compatible with Unix, is meant by the author to have many improvements over it, which are listed in detail in the manifesto. One of the major driving points behind the GNU Project, according to Stallman, was the rapid (at the time) trend toward Unix and its various components becoming proprietary (i.e. closed-source and non-libre) software. The manifesto lays a philosophical basis for launching the project and for the importance of bringing it to fruition: proprietary software is a way to divide users, who are no longer able to help each other, and Stallman refuses to write proprietary software as a sign of solidarity with them. The author provides many reasons why the project and software freedom are beneficial to users, although he agrees that their wide adoption will make the work of programmers less profitable. A large part of the "GNU Manifesto" is focused on rebutting possible objections to the GNU Project's goals. These include the programmer's need to make a living, the issue of advertising and distributing free software, and the perceived need for a profit incentive.
https://en.wikipedia.org/wiki?curid=12592
Gross domestic product Gross domestic product (GDP) is a monetary measure of the market value of all the final goods and services produced in a specific time period. GDP (nominal) per capita does not, however, reflect differences in the cost of living and the inflation rates of the countries; therefore using a basis of GDP per capita at purchasing power parity (PPP) is arguably more useful when comparing living standards between nations, while nominal GDP is more useful for comparing national economies on the international market. The OECD defines GDP as "an aggregate measure of production equal to the sum of the gross values added of all resident and institutional units engaged in production and services (plus any taxes, and minus any subsidies, on products not included in the value of their outputs)." An IMF publication states that "GDP measures the monetary value of final goods and services—that are bought by the final user—produced in a country in a given period of time (say a quarter or a year)." Total GDP can also be broken down into the contribution of each industry or sector of the economy. The ratio of GDP to the total population of the region is the per capita GDP, sometimes referred to as the mean standard of living. GDP is considered the "world's most powerful statistical indicator of national development and progress". William Petty came up with a basic concept of GDP in order to argue against the unfair taxation of landlords during warfare between the Dutch and the English between 1654 and 1676. Charles Davenant developed the method further in 1695. The modern concept of GDP was first developed by Simon Kuznets for a US Congress report in 1934. In this report, Kuznets warned against its use as a measure of welfare (see below under "limitations and criticisms"). After the Bretton Woods conference in 1944, GDP became the main tool for measuring a country's economy. At that time gross national product (GNP) was the preferred estimate, which differed from GDP in that it measured production by a country's citizens at home and abroad rather than its 'resident institutional units' (see OECD definition above). The switch from GNP to GDP in the US was in 1991, trailing behind most other nations. The role that measurements of GDP played in World War II was crucial to the subsequent political acceptance of GDP values as indicators of national development and progress. A crucial role was played here by the US Department of Commerce under Milton Gilbert where ideas from Kuznets were embedded into institutions. The history of the concept of GDP should be distinguished from the history of changes in ways of estimating it. The value added by firms is relatively easy to calculate from their accounts, but the value added by the public sector, by financial industries, and by intangible asset creation is more complex. These activities are increasingly important in developed economies, and the international conventions governing their estimation and their inclusion or exclusion in GDP regularly change in an attempt to keep up with industrial advances. In the words of one academic economist, "The actual number for GDP is therefore the product of a vast patchwork of statistics and a complicated set of processes carried out on the raw data to fit them to the conceptual framework." GDP can be determined in three ways, all of which should, theoretically, give the same result. They are the production (or output or value added) approach, the income approach, or the expenditure approach.
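Anticipating the definitions developed in the paragraphs that follow, the three approaches can be summarized as accounting identities. The display below is a compact restatement of formulas described in prose later in this article, using the component names introduced there:

```latex
\begin{aligned}
\text{Production:}\quad \text{GDP} &= \sum_{\text{sectors}}\bigl(\text{gross value of output} - \text{intermediate consumption}\bigr)\\
                                   &\quad + \text{taxes on products} - \text{subsidies on products},\\[4pt]
\text{Income:}\quad \text{GDP} &= \text{COE} + \text{GOS} + \text{GMI} + \text{taxes less subsidies on production and imports},\\[4pt]
\text{Expenditure:}\quad \text{GDP} &= C + I + G + (X - M).
\end{aligned}
```

Here COE, GOS and GMI denote compensation of employees, gross operating surplus and gross mixed income, and C, I, G, X and M denote consumption, investment, government spending, exports and imports, as defined in the sections that follow.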
The most direct of the three is the production approach, which sums the outputs of every class of enterprise to arrive at the total. The expenditure approach works on the principle that all of the product must be bought by somebody, therefore the value of the total product must be equal to people's total expenditures in buying things. The income approach works on the principle that the incomes of the productive factors ("producers," colloquially) must be equal to the value of their product, and determines GDP by finding the sum of all producers' incomes. The production approach mirrors the OECD definition given above. Gross value added = gross value of output – value of intermediate consumption. Value of output = value of the total sales of goods and services plus value of changes in the inventory. The sum of the gross value added in the various economic activities is known as "GDP at factor cost". GDP at factor cost plus indirect taxes less subsidies on products = "GDP at producer price". To measure the output of domestic product, economic activities (i.e. industries) are classified into various sectors. After classifying economic activities, the output of each sector is calculated, and the value of output of all sectors is then added to get the gross value of output at factor cost. Subtracting each sector's intermediate consumption from gross output value gives the GVA (=GDP) at factor cost. Adding indirect tax minus subsidies to GVA (GDP) at factor cost gives the "GVA (GDP) at producer prices". The second way of estimating GDP is to use "the sum of primary incomes distributed by resident producer units". If GDP is calculated this way it is sometimes called gross domestic income (GDI), or GDP (I). GDI should provide the same amount as the expenditure method described later. By definition, GDI is equal to GDP. In practice, however, measurement errors will make the two figures slightly off when reported by national statistical agencies. This method measures GDP by adding incomes that firms pay households for factors of production they hire: wages for labour, interest for capital, rent for land and profits for entrepreneurship. The US "National Income and Expenditure Accounts" divide incomes into five categories, and these five income components sum to net domestic income at factor cost. Two adjustments must then be made to get GDP: indirect taxes less subsidies are added to move from factor cost to market prices, and depreciation (consumption of fixed capital) is added to move from net to gross domestic income. Total income can be subdivided according to various schemes, leading to various formulae for GDP measured by the income approach. A common one is GDP = compensation of employees (COE) + gross operating surplus (GOS) + gross mixed income (GMI) + taxes less subsidies on production and imports. The sum of COE, GOS and GMI is called total factor income; it is the income of all of the factors of production in society. It measures the value of GDP at factor (basic) prices. The difference between basic prices and final prices (those used in the expenditure calculation) is the total taxes and subsidies that the government has levied or paid on that production. So adding taxes less subsidies on production and imports converts GDP(I) at factor cost to GDP(I) at final prices. Total factor income is also sometimes expressed as the sum of employee compensation, corporate profits, proprietors' income, rental income, and net interest. The third way to estimate GDP is to calculate the sum of the final uses of goods and services (all uses except intermediate consumption) measured in purchasers' prices. Market goods which are produced are purchased by someone. In the case where a good is produced and unsold, the standard accounting convention is that the producer has bought the good from themselves. Therefore, measuring the total expenditure used to buy things is a way of measuring production.
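Before turning to the expenditure method, the production and income calculations described above can be illustrated with a minimal numerical sketch. All figures, sector names, and the factor-income split below are invented purely for illustration; they are not drawn from any actual national accounts.

```python
# Hypothetical, illustrative figures only (no real national-accounts data).
sectors = {
    # sector: (gross value of output, intermediate consumption)
    "agriculture": (120.0, 40.0),
    "manufacturing": (300.0, 180.0),
    "services": (500.0, 220.0),
}
taxes_on_products = 60.0
subsidies_on_products = 10.0

# Production approach: sum each sector's gross value added (output minus
# intermediate consumption), then adjust for taxes less subsidies on products.
gva_at_factor_cost = sum(output - intermediate
                         for output, intermediate in sectors.values())
gdp_production = gva_at_factor_cost + taxes_on_products - subsidies_on_products

# Income approach: sum the incomes paid to the factors of production
# (total factor income), then make the same taxes-less-subsidies adjustment.
compensation_of_employees = 280.0   # COE
gross_operating_surplus = 150.0     # GOS
gross_mixed_income = 50.0           # GMI
total_factor_income = (compensation_of_employees
                       + gross_operating_surplus
                       + gross_mixed_income)
gdp_income = total_factor_income + taxes_on_products - subsidies_on_products

print(gdp_production)  # 530.0
print(gdp_income)      # 530.0 (equal by construction in this toy example)
```

In this toy data the factor incomes were chosen to equal total gross value added, so the two approaches agree exactly; in real statistics, as noted above, measurement errors leave the published figures slightly apart.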
Measuring production by total expenditure in this way is known as the expenditure method of calculating GDP. GDP (Y) is the sum of consumption (C), investment (I), government spending (G) and net exports (X – M), so that Y = C + I + G + (X – M). Note that C, G, and I are expenditures on final goods and services; expenditures on intermediate goods and services do not count. (Intermediate goods and services are those used by businesses to produce other goods and services within the accounting year.) According to the U.S. Bureau of Economic Analysis, which is responsible for calculating the national accounts in the United States, "In general, the source data for the expenditures components are considered more reliable than those for the income components [see income method, above]." GDP can be contrasted with gross national product (GNP) or, as it is now known, gross national income (GNI). The difference is that GDP defines its scope according to location, while GNI defines its scope according to ownership. In a global context, world GDP and world GNI are, therefore, equivalent terms. GDP is product produced within a country's borders; GNI is product produced by enterprises owned by a country's citizens. The two would be the same if all of the productive enterprises in a country were owned by its own citizens, and those citizens did not own productive enterprises in any other countries. In practice, however, foreign ownership makes GDP and GNI non-identical. Production within a country's borders, but by an enterprise owned by somebody outside the country, counts as part of its GDP but not its GNI; on the other hand, production by an enterprise located outside the country, but owned by one of its citizens, counts as part of its GNI but not its GDP. For example, the GNI of the USA is the value of output produced by American-owned firms, regardless of where the firms are located. Similarly, if a country becomes increasingly indebted and spends large amounts of income servicing this debt, this will be reflected in a decreased GNI but not a decreased GDP. Similarly, if a country sells off its resources to entities outside the country, this will also be reflected over time in decreased GNI, but not decreased GDP. This would make the use of GDP more attractive for politicians in countries with increasing national debt and decreasing assets. Gross national income (GNI) equals GDP plus income receipts from the rest of the world minus income payments to the rest of the world. In 1991, the United States switched from using GNP to using GDP as its primary measure of production. The relationship between United States GDP and GNP is shown in table 1.7.5 of the "National Income and Product Accounts". The international standard for measuring GDP is contained in the book "System of National Accounts" (1993), which was prepared by representatives of the International Monetary Fund, European Union, Organisation for Economic Co-operation and Development, United Nations and World Bank. The publication is normally referred to as SNA93 to distinguish it from the previous edition published in 1968 (called SNA68). SNA93 provides a set of rules and procedures for the measurement of national accounts. The standards are designed to be flexible, to allow for differences in local statistical needs and conditions. Within each country GDP is normally measured by a national government statistical agency, as private sector organizations normally do not have access to the information required (especially information on expenditure and production by governments).
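The expenditure identity and the GDP/GNI relationship described above can be illustrated with the same kind of minimal sketch; every number here is hypothetical and chosen only to make the identities visible.

```python
# Hypothetical, illustrative figures only.
consumption = 320.0          # C: household final consumption expenditure
investment = 110.0           # I: gross capital formation
government_spending = 90.0   # G: government final consumption expenditure
exports = 60.0               # X
imports = 50.0               # M

# Expenditure approach: Y = C + I + G + (X - M)
gdp_expenditure = (consumption + investment + government_spending
                   + (exports - imports))

# GNI equals GDP plus income receipts from the rest of the world
# minus income payments to the rest of the world.
income_receipts_from_abroad = 15.0
income_payments_to_abroad = 25.0
gni = gdp_expenditure + income_receipts_from_abroad - income_payments_to_abroad

print(gdp_expenditure)  # 530.0
print(gni)              # 520.0
```

In this example the country pays more income abroad than it receives, so its GNI falls below its GDP, mirroring the debt-servicing scenario discussed above.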
The raw GDP figure as given by the equations above is called the nominal, historical, or current, GDP. When one compares GDP figures from one year to another, it is desirable to compensate for changes in the value of money – for the effects of inflation or deflation. To make it more meaningful for year-to-year comparisons, it may be multiplied by the ratio between the value of money in the year the GDP was measured and the value of money in a base year. For example, suppose a country's GDP in 1990 was $100 million and its GDP in 2000 was $300 million. Suppose also that inflation had halved the value of its currency over that period. To meaningfully compare its GDP in 2000 to its GDP in 1990, we could multiply the GDP in 2000 by one-half, to make it relative to 1990 as a base year. The result would be that the GDP in 2000 equals $300 million × one-half = $150 million, "in 1990 monetary terms." We would see that the country's GDP had realistically increased 50 percent over that period, not 200 percent, as it might appear from the raw GDP data. The GDP adjusted for changes in money value in this way is called the real, or constant, GDP. The factor used to convert GDP from current to constant values in this way is called the "GDP deflator". Unlike consumer price index, which measures inflation or deflation in the price of household consumer goods, the GDP deflator measures changes in the prices of all domestically produced goods and services in an economy including investment goods and government services, as well as household consumption goods. Constant-GDP figures allow us to calculate a GDP growth rate, which indicates how much a country's production has increased (or decreased, if the growth rate is negative) compared to the previous year. Another thing that it may be desirable to account for is population growth. If a country's GDP doubled over a certain period, but its population tripled, the increase in GDP may not mean that the standard of living increased for the country's residents; the average person in the country is producing less than they were before. "Per-capita GDP" is a measure to account for population growth. The level of GDP in countries may be compared by converting their value in national currency according to "either" the current currency exchange rate, or the purchasing power parity exchange rate. The ranking of countries may differ significantly based on which method is used. There is a clear pattern of the "purchasing power parity method" decreasing the disparity in GDP between high and low income (GDP) countries, as compared to the "current exchange rate method". This finding is called the Penn effect. For more information, see Measures of national income and output. GDP per capita is often used as an indicator of living standards. The major advantage of GDP per capita as an indicator of standard of living is that it is measured frequently, widely, and consistently. It is measured frequently in that most countries provide information on GDP on a quarterly basis, allowing trends to be seen quickly. It is measured widely in that some measure of GDP is available for almost every country in the world, allowing inter-country comparisons. It is measured consistently in that the technical definition of GDP is relatively consistent among countries. GDP does not include several factors that influence the standard of living. 
In particular, GDP fails to account for factors such as harm to the environment, the distribution of income, and the value of unpaid work, all discussed below. It can be argued that GDP per capita as an indicator of standard of living is correlated with these factors, capturing them indirectly. As a result, GDP per capita continues to be used as a proxy for standard of living because most people have a fairly accurate idea of what it is and know it is tough to come up with quantitative measures for such constructs as happiness, quality of life, and well-being. Simon Kuznets, the economist who developed the first comprehensive set of measures of national income, stated in his first report to the US Congress in 1934, in a section titled "Uses and Abuses of National Income Measurements": The valuable capacity of the human mind to simplify a complex situation in a compact characterization becomes dangerous when not controlled in terms of definitely stated criteria. With quantitative measurements especially, the definiteness of the result suggests, often misleadingly, a precision and simplicity in the outlines of the object measured. Measurements of national income are subject to this type of illusion and resulting abuse, especially since they deal with matters that are the center of conflict of opposing social groups where the effectiveness of an argument is often contingent upon oversimplification. [...] All these qualifications upon estimates of national income as an index of productivity are just as important when income measurements are interpreted from the point of view of economic welfare. But in the latter case additional difficulties will be suggested to anyone who wants to penetrate below the surface of total figures and market values. Economic welfare cannot be adequately measured unless the personal distribution of income is known. And no income measurement undertakes to estimate the reverse side of income, that is, the intensity and unpleasantness of effort going into the earning of income. The welfare of a nation can, therefore, scarcely be inferred from a measurement of national income as defined above. In 1962, Kuznets stated: Distinctions must be kept in mind between quantity and quality of growth, between costs and returns, and between the short and long run. Goals for more growth should specify more growth of what and for what. Ever since the development of GDP, multiple observers have pointed out limitations of using GDP as the overarching measure of economic and social progress. For example, many environmentalists argue that GDP is a poor measure of social progress because it does not take into account harm to the environment. Although a high or rising level of GDP is often associated with increased economic and social progress within a country, a number of scholars have pointed out that this does not necessarily play out in many instances. For example, Jean Drèze and Amartya Sen have pointed out that an increase in GDP or in GDP growth does not necessarily lead to a higher standard of living, particularly in areas such as healthcare and education. Another important area that does not necessarily improve along with GDP is political liberty, which is most notable in China, where GDP growth is strong yet political liberties are heavily restricted. GDP does not account for the distribution of income among the residents of a country, because GDP is merely an aggregate measure. An economy may be highly developed or growing rapidly, but also contain a wide gap between the rich and the poor in a society.
These inequalities often occur along the lines of race, ethnicity, gender, religion, or other minority status within countries. This can lead to misleading characterizations of economic well-being if the income distribution is heavily skewed toward the high end, as the poorer residents will not directly benefit from the overall level of wealth and income generated in their country. Even GDP per capita measures may have the same downside if inequality is high. For example, South Africa during apartheid ranked high in terms of GDP per capita, but the benefits of this immense wealth and income were not shared equally across the country's population. GDP does not take into account the value of household and other unpaid work. Some, including Martha Nussbaum, argue that this value should be included in measuring GDP, as household labor is largely a substitute for goods and services that would otherwise be purchased for value. Even under conservative estimates, the value of unpaid labor in Australia has been calculated to be over 50% of the country's GDP. A later study analyzed this value in other countries, with results ranging from a low of about 15% in Canada (using conservative estimates) to a high of nearly 70% in the United Kingdom (using more liberal estimates). For the United States, the value was estimated to be from about 20% on the low end to nearly 50% on the high end, depending on the methodology being used. Because many public policies are shaped by GDP calculations and by the related field of national accounts, the non-inclusion of unpaid work in calculating GDP can create distortions in public policy, and some economists have advocated for changes in the way public policies are formed and implemented. The UK's Natural Capital Committee highlighted the shortcomings of GDP in its advice to the UK Government in 2013, pointing out that GDP "focuses on flows, not stocks. As a result, an economy can run down its assets yet, at the same time, record high levels of GDP growth, until a point is reached where the depleted assets act as a check on future growth". They then went on to say that "it is apparent that the recorded GDP growth rate overstates the sustainable growth rate. Broader measures of wellbeing and wealth are needed for this and there is a danger that short-term decisions based solely on what is currently measured by national accounts may prove to be costly in the long-term". It has been suggested that countries that have authoritarian governments, such as the People's Republic of China and Russia, inflate their GDP figures. In response to these and other limitations of using GDP, alternative approaches have emerged.
https://en.wikipedia.org/wiki?curid=12594
Georg Wilhelm Friedrich Hegel Georg Wilhelm Friedrich Hegel (; ; August 27, 1770 – November 14, 1831) was a German philosopher and an important figure of German idealism. He achieved recognition in his day and—while primarily influential in the continental tradition of philosophy—has become increasingly influential in the analytic tradition as well. His canonical stature in Western philosophy is universally recognized. Hegel's principal achievement was his development of a distinctive articulation of idealism, sometimes termed "absolute idealism", in which the dualisms of, for instance, mind and nature and subject and object are overcome. His philosophy of spirit conceptually integrates psychology, the state, history, art, religion and philosophy. His account of the master–slave dialectic has been influential, especially in 20th-century France. Of special importance is his concept of spirit ("Geist", sometimes also translated as "mind") as the historical manifestation of the logical concept – and the "sublation" ("Aufhebung", integration without elimination or reduction) – of seemingly contradictory or opposing factors: examples include the apparent opposition between necessity and freedom and between immanence and transcendence. Hegel has been seen in the twentieth century as the originator of the thesis, antithesis, synthesis triad, but as an explicit phrase it originated with Johann Gottlieb Fichte. Hegel has influenced many thinkers and writers whose own positions vary widely. Karl Barth described Hegel as a "Protestant Aquinas" while Maurice Merleau-Ponty wrote that "all the great philosophical ideas of the past century—the philosophies of Marx and Nietzsche, phenomenology, German existentialism, and psychoanalysis—had their beginnings in Hegel." He was born on August 27, 1770 in Stuttgart, capital of the Duchy of Württemberg in southwestern Germany. Christened Georg Wilhelm Friedrich, he was known as Wilhelm to his close family. His father, Georg Ludwig, was "Rentkammersekretär" (secretary to the revenue office) at the court of Karl Eugen, Duke of Württemberg. Hegel's mother, Maria Magdalena Louisa (née Fromm), was the daughter of a lawyer at the High Court of Justice at the Württemberg court. She died of a "bilious fever" ("Gallenfieber") when Hegel was thirteen. Hegel and his father also caught the disease, but they narrowly survived. Hegel had a sister, Christiane Luise (1773–1832); and a brother, Georg Ludwig (1776–1812), who was to perish as an officer in Napoleon's Russian campaign of 1812. At the age of three, he went to the German School. When he entered the Latin School two years later, he already knew the first declension, having been taught it by his mother. In 1776, he entered Stuttgart's "gymnasium illustre" and during his adolescence read voraciously, copying lengthy extracts in his diary. Authors he read include the poet Friedrich Gottlieb Klopstock and writers associated with the Enlightenment, such as Christian Garve and Gotthold Ephraim Lessing. His studies at the "Gymnasium" were concluded with his "Abiturrede" ("graduation speech") entitled "The abortive state of art and scholarship in Turkey" ("Der verkümmerte Zustand der Künste und Wissenschaften unter den Türken"). At the age of eighteen, Hegel entered the Tübinger Stift (a Protestant seminary attached to the University of Tübingen), where he had as roommates the poet and philosopher Friedrich Hölderlin and the philosopher-to-be Friedrich Wilhelm Joseph Schelling. 
Sharing a dislike for what they regarded as the restrictive environment of the Seminary, the three became close friends and mutually influenced each other's ideas. All greatly admired Hellenic civilization and Hegel additionally steeped himself in Jean-Jacques Rousseau and Lessing during this time. They watched the unfolding of the French Revolution with shared enthusiasm. Schelling and Hölderlin immersed themselves in theoretical debates on Kantian philosophy, from which Hegel remained aloof. Hegel at this time envisaged his future as that of a "Popularphilosoph", i.e. a "man of letters" who serves to make the abstruse ideas of philosophers accessible to a wider public; his own felt need to engage critically with the central ideas of Kantianism did not come until 1800. Although the violence of the Reign of Terror in 1793 dampened Hegel's hopes, he continued to identify with the moderate Girondin faction and never lost his commitment to the principles of 1789, which he would express by drinking a toast to the storming of the Bastille every fourteenth of July. Having received his theological certificate ("Konsistorialexamen") from the Tübingen Seminary, Hegel became "Hofmeister" (house tutor) to an aristocratic family in Bern (1793–1796). During this period, he composed the text which has become known as the "Life of Jesus" and a book-length manuscript titled "The Positivity of the Christian Religion". His relations with his employers becoming strained, Hegel accepted an offer mediated by Hölderlin to take up a similar position with a wine merchant's family in Frankfurt, to which he relocated in 1797. Here, Hölderlin exerted an important influence on Hegel's thought. While in Frankfurt, Hegel composed the essay "Fragments on Religion and Love". In 1799, he wrote another essay entitled "The Spirit of Christianity and Its Fate", unpublished during his lifetime. Also in 1797, the unpublished and unsigned manuscript of "The Oldest Systematic Program of German Idealism" was written. It was written in Hegel's hand, but thought to have been authored by either Hegel, Schelling, Hölderlin, or an unknown fourth person. In 1801, Hegel came to Jena with the encouragement of his old friend Schelling, who held the position of Extraordinary Professor at the University there. Hegel secured a position at the University as a "Privatdozent" (unsalaried lecturer) after submitting the inaugural dissertation "De Orbitis Planetarum", in which he briefly criticized arguments that assert—based on Bode's Law or other arbitrary choice of mathematical series—there must exist a planet between Mars and Jupiter. Unbeknownst to Hegel, Giuseppe Piazzi had discovered the minor planet Ceres within that orbit on January 1, 1801. Later in the year, Hegel's first book "The Difference Between Fichte's and Schelling's Systems of Philosophy" was completed. He lectured on "Logic and Metaphysics" and gave joint lectures with Schelling on an "Introduction to the Idea and Limits of True Philosophy" and held a "Philosophical Disputorium". In 1802, Schelling and Hegel founded a journal, the "Kritische Journal der Philosophie" ("Critical Journal of Philosophy"), to which they each contributed pieces until the collaboration was ended when Schelling left for Würzburg in 1803. 
In 1805, the University promoted Hegel to the position of Extraordinary Professor (unsalaried) after he wrote a letter to the poet and minister of culture Johann Wolfgang Goethe protesting at the promotion of his philosophical adversary Jakob Friedrich Fries ahead of him. Hegel attempted to enlist the help of the poet and translator Johann Heinrich Voß to obtain a post at the newly renascent University of Heidelberg, but he failed; to his chagrin, Fries was later in the same year made Ordinary Professor (salaried) there. With his finances drying up quickly, Hegel was now under great pressure to deliver his book, the long-promised introduction to his System. Hegel was putting the finishing touches to this book, "The Phenomenology of Spirit", as Napoleon engaged Prussian troops on 14 October 1806 in the Battle of Jena on a plateau outside the city. On the day before the battle, Napoleon entered the city of Jena. Hegel recounted his impressions in a letter to his friend Friedrich Immanuel Niethammer: I saw the Emperor—this world-soul ["Weltseele"]—riding out of the city on reconnaissance. It is indeed a wonderful sensation to see such an individual, who, concentrated here at a single point, astride a horse, reaches out over the world and masters it. Pinkard (2000) notes that Hegel's comment to Niethammer "is all the more striking since at that point he had already composed the crucial section of the "Phenomenology" in which he remarked that the Revolution had now officially passed to another land (Germany) that would complete 'in thought' what the Revolution had only partially accomplished in practice". Although Napoleon chose not to close down Jena as he had other universities, the city was devastated and students deserted the university in droves, making Hegel's financial prospects even worse. The following February marked the birth of Hegel's illegitimate son, Georg Ludwig Friedrich Fischer (1807–1831), as the result of an affair with Hegel's landlady Christiana Burkhardt née Fischer (who had been abandoned by her husband). In March 1807, Hegel moved to Bamberg, where Niethammer had declined and passed on to Hegel an offer to become editor of a newspaper, the "Bamberger Zeitung". Unable to find more suitable employment, Hegel reluctantly accepted. Ludwig Fischer and his mother (whom Hegel may have offered to marry following the death of her husband) stayed behind in Jena. In November 1808, Hegel was, again through Niethammer, appointed headmaster of a "Gymnasium" in Nuremberg, a post he held until 1816. While in Nuremberg, Hegel adapted his recently published "Phenomenology of Spirit" for use in the classroom. Part of his remit being to teach a class called "Introduction to Knowledge of the Universal Coherence of the Sciences", Hegel developed the idea of an encyclopedia of the philosophical sciences, falling into three parts (logic, philosophy of nature and philosophy of spirit). In 1811, Hegel married Marie Helena Susanna von Tucher (1791–1855), the eldest daughter of a Senator. This period saw the publication of his second major work, the "Science of Logic" ("Wissenschaft der Logik"; 3 vols., 1812, 1813 and 1816), and the birth of his two legitimate sons, Karl Friedrich Wilhelm (1813–1901) and Immanuel Thomas Christian (1814–1891). Having received offers of a post from the Universities of Erlangen, Berlin and Heidelberg, Hegel chose Heidelberg, where he moved in 1816.
Soon after, his illegitimate son Ludwig Fischer (now ten years old) joined the Hegel household in April 1817, having thus far spent his childhood in an orphanage as his mother had died in the meantime. Hegel published "The Encyclopedia of the Philosophical Sciences in Outline" (1817) as a summary of his philosophy for students attending his lectures at Heidelberg. In 1818, Hegel accepted the renewed offer of the chair of philosophy at the University of Berlin, which had remained vacant since Johann Gottlieb Fichte's death in 1814. Here, Hegel published his "Philosophy of Right" (1821). Hegel devoted himself primarily to delivering his lectures; and his lecture courses on aesthetics, the philosophy of religion, the philosophy of history and the history of philosophy were published posthumously from lecture notes taken by his students. His fame spread and his lectures attracted students from all over Germany and beyond. Between 1819 and 1827, he made trips to Weimar (twice), where he met Goethe, to Brussels, the Northern Netherlands, Leipzig, and Paris, and to Vienna by way of Prague. Hegel was appointed Rector of the University in October 1829, but his term as Rector ended in September 1830. Hegel was deeply disturbed by the riots for reform in Berlin in that year. In 1831, Frederick William III decorated him with the Order of the Red Eagle, 3rd Class for his service to the Prussian state. In August 1831, a cholera epidemic reached Berlin and Hegel left the city, taking up lodgings in Kreuzberg. Now in a weak state of health, Hegel seldom went out. As the new semester began in October, Hegel returned to Berlin with the (mistaken) impression that the epidemic had largely subsided. By November 14, Hegel was dead. The physicians pronounced the cause of death as cholera, but it is likely he died from a different gastrointestinal disease. His last words are said to have been, "There was only one man who ever understood me, and even he didn't understand me." He was buried on November 16. In accordance with his wishes, Hegel was buried in the Dorotheenstadt cemetery next to Fichte and Karl Wilhelm Ferdinand Solger. Hegel's son Ludwig Fischer had died shortly before while serving with the Dutch army in Batavia and the news of his death never reached his father. Early the following year, Hegel's sister Christiane committed suicide by drowning. Hegel's remaining two sons—Karl, who became a historian, and Immanuel, who followed a theological path—lived long and safeguarded their father's "Nachlaß" and produced editions of his works. From the time of Leibniz to the widespread adoption of Frege's logic in the 1930s, every standard work on logic consisted of three divisions: doctrines of concept, judgment, and inference. Doctrines of concept address the systematic, hierarchical relations of the most general classes of things; doctrines of judgment investigate relations of subject and predicate; and doctrines of inference lay out the forms of syllogisms originally found in Aristotelian term logic. Indeed, "logic" in the field of nineteenth-century continental philosophy takes on a range of meanings from "metaphysics" to "theory of science," from "critical epistemology" to "first philosophy." And debates about the nature of logic were intertwined with competition to inherit the mantle of Kant and with it the future direction of German philosophy.
Each new logic book staked a new claim in a century-long expansionist turf war among philosophical trends. With the possible exception of the study of inference, what was called "logic" in nineteenth-century Europe (and so Hegel's "Logic") bears little resemblance to what logicians study today. Logic, particularly the doctrine of the concept, was metaphysics; it was the search for a fundamental ontological structure within the relations of the most basic predicates (quantity, time, place etc.), a practice that goes back to Plato's "Sophist" and Aristotle's "Categories." This research program took on new meaning with the 1781 publication of Kant's "Critique of Pure Reason". Kant derives his own table of categories (the twelve pure or "ancestral" concepts of the understanding that structure all experience irrespective of content) from a standard term-logical table of judgments, noting also that "...the true ancestral concepts...also have their equally pure derivative concepts, which could by no means be passed over in a complete system of transcendental philosophy, but with the mere mention of which I can be satisfied in a merely critical essay." The "Science of Logic" (which the later Hegel considered central to his philosophy) can be considered a notable contribution to the research program of category metaphysics in its post-Kantian form, taking up the project that Kant in the above passage suggests is necessary but does not himself pursue: "to take note of and, as far as possible, completely catalog" the derivative concepts of the pure understanding and "completely illustrate its family tree." The affinity between Hegel's and Kant's logics ("speculative" and "transcendental" respectively) is apparent in their vocabulary—for instance, Kant speaks of "Entstehen" (coming-to-be) and "Vergehen" (ceasing-to-be), the same two terms that Hegel uses to refer to the two compositional elements of "Werden" (becoming). Kant uses the term "Veränderung" (change) here instead of "Werden", however, and the designation of ontological categories by name is itself a complex topic. And although the "Logic"'s table of contents minimally resembles Kant's table of categories, the four headings of Kant's table (quantity, quality, relation, and modality) do not play, in Hegel's dialectic, the organizational role that Kant had in mind for them, and Hegel ultimately faults Kant for copying the table of judgments from the "modern compendiums of logic" whose subject matter is, Hegel says, in need of "total reconstruction." So how "are" the categories derived? Hegel writes that "...profounder insight into the antinomial, or more truly into the dialectical nature of reason demonstrates "any" Concept ["Begriff"] whatsoever to be a unity of opposed elements ["Momente"] to which, therefore, the form of antinomial assertions could be given." In other words, because every concept is a composite of contraries (value is black and white, temperature is hot and cold, etc.), all the pure concepts of the understanding are immanently contained within the most abstract concept; the entire tree of the concepts of the pure understanding unfolds from a single concept the way a tree grows from a seed. For this reason, Hegel's "Logic" begins with the summum genus, "Being, pure Being," ("and "God" has the absolutely undisputed right that the beginning be made with him") from which are derived more concrete concepts such as becoming, determinate being, something, and infinity.
The precise nature of the procedural self-concretization that drives Hegel's "Logic" is still the subject of controversy. Scholars such as Clark Butler hold that a good portion of the "Logic" is formalizable, proceeding deductively via indirect proof. Others, such as Hans-Georg Gadamer, believe that Hegel's course in the "Logic" is determined primarily by the associations of ordinary words in the German language. However, regardless of its status as a formal logic, Hegel clearly understood the course of his logic to be reflected in the course of history (albeit inexactly, the way a child resembles its parents): "...different stages of the logical Idea assume the shape of successive systems, each based on a particular definition of the Absolute. As the logical Idea is seen to unfold itself in a process from the abstract to the concrete, so in the history of philosophy the earliest systems are the most abstract, and thus at the same time the poorest..." Thus, Hegel's categories are at least in part carried over from his Lectures on the History of Philosophy. For example: Parmenides takes pure being to be the absolute; Gorgias replaces it with pure nothing; Heraclitus replaces both being and nothing with becoming (which is a unity of two contraries: coming-to-be and ceasing-to-be). Hegel understands the history of philosophy to be a trans-historical Socratic argument concerning the identity of the absolute. That history should resemble this dialectic indicates to Hegel that history is something "rational". For both Hegel and Kant, "we arrive at the concept of the thing in itself by removing, or abstracting from, everything in our experiences of objects of which we can become conscious." Surprisingly, Hegel faults Kant for not carrying this abstracting procedure far enough; the thing-in-itself is an arbitrary pit stop on the way to "nothing", pure indeterminacy: If we abstract 'Ding' ["thing"] from 'Ding-an-sich' ["thing-in-itself"], we get one of Hegel's standard phrases: 'an sich' ["in itself"]. A child, in Hegel's example, is thus 'in itself' the adult it will become: to know what a 'child' is means to know that it is, in some respects, a vacancy which will only gain content after it has grown out of childhood. The thing as it is in itself is indeed knowable: it is the indeterminate, "futural" aspect of the thing we experience; it is what we will come to know. In other words, although the thing-in-itself is at any given moment thoroughly unknown, it nevertheless remains that part of the thing about which it is presently possible to learn more. Karen Ng writes that "there is a central, recurring rhetorical device that Hegel returns to again and again throughout his philosophical system: that of describing the activity of reason and thought in terms of the dynamic activity and development of organic life." Hegel goes so far as to include the concept of life as a category in his "Science of Logic", likely inspired by Aristotle's emphasis on teleology, as well as Kant's treatment of "Naturzweck" (natural purposiveness) in the "Critique of Judgment". The speculative identity of mind and nature suggests that reason and history progress in the direction of the Absolute by traversing various stages of relative immaturity, just like a sapling or a child, overcoming necessary setbacks and obstacles along the way (see Progress below).
The structure of Hegel's "Logic" appears to exhibit self-similarity, with sub-sections, in their treatment of more specific subject matter, resembling the treatment of the whole. Hegel's concept of "Aufhebung", by which parts are preserved and repurposed within the whole, anticipates the concept of emergence in contemporary systems theory and evolutionary biology. Hegel's system has accordingly been illustrated in the form of a Sierpiński triangle. Hegel's thinking can be understood as a constructive development within the broad tradition that includes Plato and Immanuel Kant. To this list, one could add Proclus, Meister Eckhart, Gottfried Wilhelm Leibniz, Plotinus, Jakob Böhme, and Jean-Jacques Rousseau. What all these thinkers share, which distinguishes them from materialists like Epicurus and Thomas Hobbes and from empiricists like David Hume, is that they regard freedom or self-determination both as real and as having important ontological implications for soul or mind or divinity. This focus on freedom is what generates Plato's notion (in the "Phaedo", "Republic" and "Timaeus") of the soul as having a higher or fuller kind of reality than inanimate objects possess. While Aristotle criticizes Plato's "Forms", he preserves Plato's cornerstones concerning the ontological implications of self-determination: ethical reasoning, the soul's place at the pinnacle of the hierarchy of nature, the order of the cosmos, and reasoned arguments for a prime mover. Kant imports Plato's high esteem of individual sovereignty into his considerations of moral and noumenal freedom as well as of God. All three find common ground on the unique position of humans in the scheme of things, marked by categorical differences from animals and inanimate objects. In his discussion of "Spirit" in his "Encyclopedia", Hegel praises Aristotle's "On the Soul" as "by far the most admirable, perhaps even the sole, work of philosophical value on this topic". In his "Phenomenology of Spirit" and his "Science of Logic", Hegel's concern with Kantian topics such as freedom and morality and with their ontological implications is pervasive. Rather than simply rejecting Kant's dualism of freedom versus nature, Hegel aims to subsume it within "true infinity", the "Concept" (or "Notion": "Begriff"), "Spirit" and "ethical life" in such a way that the Kantian duality is rendered intelligible, rather than remaining a brute "given". The reason why this subsumption takes place in a series of concepts is that Hegel's method in his "Science of Logic" and his "Encyclopedia" is to begin with basic concepts like "Being" and "Nothing" and to develop these through a long sequence of elaborations, including those already mentioned. In this manner, a solution that is reached in principle in the account of "true infinity" in the "Science of Logic"'s chapter on "Quality" is repeated in new guises at later stages, all the way to "Spirit" and "ethical life" in the third volume of the "Encyclopedia". In this way, Hegel intends to defend the germ of truth in Kantian dualism against reductive or eliminative programs like those of materialism and empiricism. Like Plato, with his dualism of soul versus bodily appetites, Kant pursues the mind's ability to question its felt inclinations or appetites and to come up with a standard of "duty" (or, in Plato's case, "good") which transcends bodily restrictiveness. 
Hegel preserves this essential Platonic and Kantian concern in the form of infinity going beyond the finite (a process that Hegel in fact relates to "freedom" and the "ought"), the universal going beyond the particular (in the Concept) and Spirit going beyond Nature. Hegel renders these dualities intelligible, ultimately, by his argument in the "Quality" chapter of the "Science of Logic". The finite has to become infinite in order to achieve reality. The idea of the absolute excludes multiplicity, so the subjective and the objective must achieve synthesis to become whole. This is because, as Hegel suggests by his introduction of the concept of "reality", what determines itself—rather than depending on its relations to other things for its essential character—is more fully "real" (following the Latin etymology of "real", more "thing-like") than what does not. Finite things do not determine themselves because, as "finite" things, their essential character is determined by their boundaries over against other finite things, so in order to become "real" they must go beyond their finitude ("finitude is only as a transcending of itself"). The result of this argument is that finite and infinite—and by extension, particular and universal, nature and freedom—do not face one another as two independent realities, but instead the latter (in each case) is the self-transcending of the former. Rather than stress the distinct singularity of each factor that complements and conflicts with others—without explanation—the relationship between finite and infinite (and particular and universal and nature and freedom) becomes intelligible as a progressively developing and self-perfecting whole. The mystical writings of Jakob Böhme had a strong effect on Hegel. Böhme had written that the Fall of Man was a necessary stage in the evolution of the universe. This evolution was itself the result of God's desire for complete self-awareness. Hegel was fascinated by the works of Kant, Rousseau and Johann Wolfgang Goethe and by the French Revolution. Modern philosophy, culture and society seemed to Hegel fraught with contradictions and tensions, such as those between the subject and object of knowledge, mind and nature, self and Other, freedom and authority, knowledge and faith, or the Enlightenment and Romanticism. Hegel's main philosophical project was to take these contradictions and tensions and interpret them as part of a comprehensive, evolving, rational unity that in different contexts he called "the absolute Idea" ("Science of Logic", sections 1781–1783) or "absolute knowledge" ("Phenomenology of Spirit", "(DD) Absolute Knowledge"). According to Hegel, the main characteristic of this unity was that it evolved through and manifested itself in contradiction and negation. Contradiction and negation have a dynamic quality that at every point in each domain of reality—consciousness, history, philosophy, art, nature and society—leads to further development until a rational unity is reached that preserves the contradictions as phases and sub-parts by lifting them up ("Aufhebung") to a higher unity. This whole is mental because it is mind that can comprehend all of these phases and sub-parts as steps in its own process of comprehension. It is rational because the same, underlying, logical, developmental order underlies every domain of reality and is ultimately the order of self-conscious rational thought, although only in the later stages of development does it come to full self-consciousness. 
The rational, self-conscious whole is not a thing or being that lies outside of other existing things or minds. Rather, it comes to completion only in the philosophical comprehension of individual existing human minds who through their own understanding bring this developmental process to an understanding of itself. Hegel's thought is revolutionary to the extent that it is a philosophy of absolute negation—as long as absolute negation is at the center, systematization remains open, and makes it possible for human beings to become subjects. "Mind" and "Spirit" are the common English translations of Hegel's use of the German "Geist". Some have argued that either of these terms overly "psychologizes" Hegel, implying a kind of disembodied, solipsistic consciousness like a ghost or "soul". Geist combines the meaning of spirit—as in god, ghost, or mind—with an intentional force. In Hegel's early philosophy of nature (draft manuscripts written during his time at the University of Jena), his notion of "Geist" was tightly bound to the notion of "Aether", from which Hegel also derived the concepts of space and time, but in his later works (after Jena) he did not explicitly use his old notion of "Aether" anymore. Central to Hegel's conception of knowledge and mind (and therefore also of reality) was the notion of identity in difference—that is, that mind externalizes itself in various forms and objects that stand outside of it or opposed to it; and that, through recognizing itself in them, it is "with itself" in these external manifestations so that they are at one and the same time mind and other-than-mind. This notion of identity in difference, which is intimately bound up with his conception of contradiction and negativity, is a principal feature differentiating Hegel's thought from that of other philosophers. Hegel made the distinction between civil society and state in his "Elements of the Philosophy of Right". In this work, civil society (Hegel used the term "bürgerliche Gesellschaft", though it is now referred to as "Zivilgesellschaft" in German to emphasize a more inclusive community) was a stage in the dialectical relationship that occurs between Hegel's perceived opposites, the macro-community of the state and the micro-community of the family. Broadly speaking, the term was split, like Hegel's followers, into the political left and right. On the left, it became the foundation for Karl Marx's civil society as an economic base; to the right, it became a description for all non-state aspects of society (the state being the peak of the objective spirit), including culture, society and politics. This liberal distinction between political society and civil society was followed by Alexis de Tocqueville. In fact, Hegel's distinctions as to what he meant by civil society are often unclear. For example, while it seems to be the case that he felt that a civil society such as the German society in which he lived was an inevitable movement of the dialectic, he made way for the crushing of other, "lesser" and not fully realized types of civil society, as these societies were not fully conscious or aware, as it were, of the lack of progress in their societies. Thus, it was perfectly legitimate in the eyes of Hegel for a conqueror such as Napoleon to come along and destroy that which was not fully realized. Hegel's State is the final culmination of the embodiment of freedom or right ("Rechte") in the "Elements of the Philosophy of Right." The State subsumes family and civil society and fulfills them. 
All three together are called "ethical life" ("Sittlichkeit"). The State involves three "moments". In a Hegelian State, citizens both know their place and choose their place. They both know their obligations and choose to fulfill their obligations. An individual's "supreme duty is to be a member of the state" ("Elements of the Philosophy of Right", section 258). The individual has "substantial freedom in the state". The State is "objective spirit", so "it is only through being a member of the state that the individual himself has objectivity, truth, and ethical life" (section 258). Furthermore, every member loves the State with genuine patriotism but has transcended mere "team spirit" by reflectively endorsing their citizenship. Members of a Hegelian State are happy even to sacrifice their lives for the State. According to Hegel, "Heraclitus is the one who first declared the nature of the infinite and first grasped nature as in itself infinite, that is, its essence as process. The origin of philosophy is to be dated from Heraclitus. His is the persistent Idea that is the same in all philosophers up to the present day, as it was the Idea of Plato and Aristotle". For Hegel, Heraclitus's great achievements were to have understood the nature of the infinite, which for Hegel includes understanding the inherent contradictoriness and negativity of reality; and to have grasped that reality is becoming or process and that "being" and "nothingness" are mere empty abstractions. According to Hegel, Heraclitus's "obscurity" comes from his being a true (in Hegel's terms "speculative") philosopher who grasped the ultimate philosophical truth and therefore expressed himself in a way that goes beyond the abstract and limited nature of common sense and is difficult to grasp by those who operate within common sense. Hegel asserted that in Heraclitus he had an antecedent for his logic: "[...] there is no proposition of Heraclitus which I have not adopted in my logic". Hegel cites a number of fragments of Heraclitus in his "Lectures on the History of Philosophy". One to which he attributes great significance is the fragment he translates as "Being is not more than Non-being", which he interprets to mean the following: "Sein und Nichts sei dasselbe" ("Being and non-being are the same"). Heraclitus does not form any abstract nouns from his ordinary use of "to be" and "to become", and in that fragment seems to be opposing any identity A to any other identity B, C and so on, which is not-A. However, Hegel does not interpret not-A as something that does not exist at all (a nothing at all, which cannot be conceived), but as indeterminate or "pure" being without particularity or specificity. Pure being and pure non-being or nothingness are for Hegel pure abstractions from the reality of becoming, and this is also how he interprets Heraclitus. This interpretation of Heraclitus cannot be ruled out, but even if present it is not the main gist of his thought. For Hegel, the inner movement of reality is the process of God thinking as manifested in the evolution of the universe of nature and thought; that is, Hegel argued that when fully and properly understood, reality is being thought by God as manifested in a person's comprehension of this process in and through philosophy. Since human thought is the image and fulfillment of God's thought, God is not ineffable (so incomprehensible as to be unutterable), but can be understood by an analysis of thought and reality. 
Just as humans continually correct their concepts of reality through a dialectical process, so God himself becomes more fully manifested through the dialectical process of becoming. For his god, Hegel does not take the logos of Heraclitus but refers rather to the nous of Anaxagoras, although he may well have regarded them as the same, as he continues to refer to god's plan, which is identical to God. Whatever the nous thinks at any time is actual substance and is identical to limited being, but more remains to be thought in the substrate of non-being, which is identical to pure or unlimited thought. The universe as becoming is therefore a combination of being and non-being. The particular is never complete in itself but, in order to find completion, is continually transformed into more comprehensive, complex, self-relating particulars. The essential nature of being-for-itself is that it is free "in itself;" that is, it does not depend on anything else such as matter for its being. The limitations represent fetters, which it must constantly be casting off as it becomes freer and more self-determining. Although Hegel began his philosophizing with commentary on the Christian religion and often expresses the view that he is a Christian, his ideas of God are not acceptable to some Christians even though he has had a major influence on 19th- and 20th-century theology. Hegel was a graduate of a Protestant seminary, and his theological concerns were reflected in many of his writings and lectures. Hegel's thoughts on the person of Jesus Christ stood out from the theologies of the Enlightenment. In his posthumously published "Lectures on the Philosophy of Religion, Part 3", Hegel is shown as being particularly interested in the demonstrations of God's existence and the ontological proof. He espouses that "God is not an abstraction but a concrete God [...] God, considered in terms of his eternal Idea, has to generate the Son, has to distinguish himself from himself; he is the process of differentiating, namely, love and Spirit". This means that Jesus as the Son of God is posited by God over against himself as other. Hegel sees both a relational unity and a metaphysical unity between Jesus and God the Father. To Hegel, Jesus is both divine and human. Hegel further attests that God (as Jesus) not only died, but "[...] rather, a reversal takes place: God, that is to say, maintains himself in the process, and the latter is only the death of death. God rises again to life, and thus things are reversed". The philosopher Walter Kaufmann has argued that great stress should be placed on the sharp criticisms of traditional Christianity appearing in Hegel's so-called early theological writings. Kaufmann admits that Hegel treated many distinctively Christian themes and "sometimes could not resist equating" his conception of spirit (Geist) "with God, instead of saying clearly: in God I do not believe; spirit suffices me". Kaufmann also points out that Hegel's references to God or to the divine—and also to spirit—drew on classical Greek as well as Christian connotations of the terms. Kaufmann goes on: Aside from his beloved Greeks, Hegel saw before him the example of Spinoza and, in his own time, the poetry of Goethe, Schiller, and Hölderlin, who also liked to speak of gods and the divine. 
So he, too, sometimes spoke of God and, more often, of the divine; and because he occasionally took pleasure in insisting that he was really closer to this or that Christian tradition than some of the theologians of his time, he has sometimes been understood to have been a Christian. According to Hegel himself, his philosophy was consistent with Christianity. This led Hegelian philosopher, jurist and politician Carl Friedrich Göschel (1784–1861) to write a treatise demonstrating the consistency of Hegel's philosophy with the Christian doctrine of the immortality of the human soul. Göschel's book on this subject was titled "Von den Beweisen für die Unsterblichkeit der menschlichen Seele im Lichte der spekulativen Philosophie: eine Ostergabe" (Berlin: Verlag von Duncker und Humblot, 1835). Hegel seemed to have an ambivalent relationship with magic, myth and Paganism. He formulates an early philosophical example of a disenchantment narrative, arguing that Judaism was responsible both for realizing the existence of "Geist" and, by extension, for separating nature from ideas of spiritual and magical forces and challenging polytheism. However, Hegel's manuscript "The Oldest Systematic Program of German Idealism" suggests that Hegel was concerned about the perceived decline in myth and enchantment in his age, and that he therefore called for a "new myth" to fill the cultural vacuum. Hegel continued to develop his thoughts on religion both in terms of how it was to be given a 'wissenschaftlich', or "theoretically rigorous," account in the context of his own "system," and, most importantly, in terms of how a fully modern religion could be understood. Hegel published four works during his lifetime: the "Phenomenology of Spirit", the "Science of Logic", the "Encyclopedia of the Philosophical Sciences" and the "Elements of the Philosophy of Right". During the last ten years of his life, Hegel did not publish another book, but thoroughly revised the "Encyclopedia" (second edition, 1827; third, 1830). In his political philosophy, he criticized Karl Ludwig von Haller's reactionary work, which claimed that laws were not necessary. He also published some articles early in his career and during his Berlin period. A number of other works on the philosophy of history, religion, aesthetics and the history of philosophy were compiled from the lecture notes of his students and published posthumously. Hegel's thought is sometimes viewed as representing the summit of early 19th-century Germany's movement of philosophical idealism. It would come to have a profound impact on many future philosophical schools, including schools that opposed Hegel's specific dialectical idealism, such as existentialism, the historical materialism of Marx, historism and British Idealism. Hegel's influence was immense both in philosophy and in the other sciences. Throughout the 19th century many chairs of philosophy around Europe were held by Hegelians, and Søren Kierkegaard, Ludwig Feuerbach, Karl Marx and Friedrich Engels—among many others—were all deeply influenced by, but also strongly opposed to, many of the central themes of Hegel's philosophy. Scholars continue to find and point out Hegelian influences and approaches in a wide range of theoretical and/or learned works, such as Carl von Clausewitz's "magnum opus" on strategic thought, "On War" (1831). After less than a generation, Hegel's philosophy was suppressed and even banned by the Prussian right wing and was firmly rejected by the left wing in multiple official writings. 
After the period of Bruno Bauer, Hegel's influence waned until the philosophy of British Idealism and the 20th-century Hegelian Western Marxism that began with György Lukács. In the United States, Hegel's influence is evident in pragmatism. The more recent movement of communitarianism has a strong Hegelian influence. Some of Hegel's writing was intended for those with advanced knowledge of philosophy, although his "Encyclopedia" was intended as a textbook in a university course. Nevertheless, Hegel assumes that his readers are well-versed in Western philosophy. Especially crucial are Aristotle, Immanuel Kant and Kant's immediate successors, most prominently Johann Gottlieb Fichte and Friedrich Wilhelm Joseph Schelling. Those without this background would be well-advised to begin with one of the many general introductions to his thought. As is always the case, difficulties are magnified for those reading him in translation. In fact, Hegel himself argues in his "Science of Logic" that the German language was particularly conducive to philosophical thought. According to Walter Kaufmann, the basic idea of Hegel's works, especially the "Phenomenology of Spirit", is that a philosopher should not "confine him or herself to views that have been held but penetrate these to the human reality they reflect". In other words, it is not enough to consider propositions, or even the content of consciousness; "it is worthwhile to ask in every instance what kind of spirit would entertain such propositions, hold such views, and have such a consciousness. Every outlook, in other words, is to be studied not merely as an academic possibility but as an existential reality". Kaufmann has argued that, as unlikely as it may sound, it is not the case that Hegel was unable to write clearly, but that Hegel felt that "he must and should not write in the way in which he was gifted". Some historians have spoken of Hegel's influence as represented by two opposing camps. The Right Hegelians, the allegedly direct disciples of Hegel at the Friedrich-Wilhelms-Universität, advocated a Protestant orthodoxy and the political conservatism of the post-Napoleon Restoration period. Today this faction continues among conservative Protestants, such as the Wisconsin Evangelical Lutheran Synod, which was founded by missionaries from Germany when the Hegelian Right was active. The Left Hegelians, also known as the Young Hegelians, interpreted Hegel in a revolutionary sense, leading to the advocacy of atheism in religion and liberal democracy in politics. In more recent studies, this paradigm has been questioned. No Hegelians of the period ever referred to themselves as "Right Hegelians", which was a term of insult originated by David Strauss, a self-styled Left Hegelian. Critiques of Hegel offered by the Left Hegelians radically diverted Hegel's thinking into new directions and eventually came to form a disproportionately large part of the literature on and about Hegel. The Left Hegelians also influenced Marxism, which has in turn inspired global movements, from the Russian Revolution and the Chinese Revolution to a myriad of practices up to the present moment. 20th-century interpretations of Hegel were mostly shaped by British idealism, logical positivism, Marxism and Fascism. According to Benedetto Croce, the Italian Fascist Giovanni Gentile "holds the honor of having been the most rigorous neo-Hegelian in the entire history of Western philosophy and the dishonor of having been the official philosopher of Fascism in Italy". 
However, since the fall of the Soviet Union, a new wave of Hegel scholarship has arisen in the West without the preconceptions of the prior schools of thought. Scholars such as Walter Jaeschke and Otto Pöggeler in Germany, as well as Peter Hodgson and Howard Kainz in the United States, are notable for their recent contributions to post-Soviet-Union thinking about Hegel. In previous modern accounts of Hegelianism (to undergraduate classes, for example), especially those formed prior to the Hegel renaissance, Hegel's dialectic was most often characterized as a three-step process, "thesis, antithesis, synthesis"; namely, that a "thesis" (e.g. the French Revolution) would cause the creation of its "antithesis" (e.g. the Reign of Terror that followed) and would eventually result in a "synthesis" (e.g. the constitutional state of free citizens). However, Hegel used this classification only once, and he attributed the terminology to Kant. The terminology was largely developed earlier by Fichte. It was spread by Heinrich Moritz Chalybäus in accounts of Hegelian philosophy, and since then the terms have been used as descriptive of this type of framework. The "thesis–antithesis–synthesis" approach gives the sense that things or ideas are contradicted or opposed by things that come from outside them. To the contrary, the fundamental notion of Hegel's dialectic is that things or ideas have internal contradictions. From Hegel's point of view, analysis or comprehension of a thing or idea reveals that underneath its apparently simple identity or unity is an underlying inner contradiction. This contradiction leads to the dissolution of the thing or idea in the simple form in which it presented itself and to a higher-level, more complex thing or idea that more adequately incorporates the contradiction. The triadic form that appears in many places in Hegel (e.g. being–nothingness–becoming, immediate–mediate–concrete and abstract–negative–concrete) is about this movement from inner contradiction to higher-level integration or unification. For Hegel, reason is ultimately "speculative", not "dialectical". Believing that the traditional description of Hegel's philosophy in terms of thesis–antithesis–synthesis was mistaken, a few scholars like Raya Dunayevskaya have attempted to discard the triadic approach altogether. According to their argument, although Hegel refers to "the two elemental considerations: first, the idea of freedom as the absolute and final aim; secondly, the means for realising it, i.e. the subjective side of knowledge and will, with its life, movement, and activity" (thesis and antithesis), he does not use "synthesis", but instead speaks of the "Whole": "We then recognised the State as the moral Whole and the Reality of Freedom, and consequently as the objective unity of these two elements". Furthermore, in Hegel's language, the "dialectical" aspect or "moment" of thought and reality, by which things or thoughts turn into their opposites or have their inner contradictions brought to the surface, what he called "Aufhebung", is only preliminary to the "speculative" (and not "synthesizing") aspect or "moment", which grasps the unity of these opposites or contradiction. It is widely admitted today that the old-fashioned description of Hegel's philosophy in terms of thesis–antithesis–synthesis is inaccurate. Nevertheless, such is the persistence of this misnomer that the model and terminology survive in a number of scholarly works. In the last half of the 20th century, Hegel's philosophy underwent a major renaissance. 
This was due to (a) the rediscovery and re-evaluation of Hegel as a possible philosophical progenitor of Marxism by philosophically oriented Marxists; (b) a resurgence of the historical perspective that Hegel brought to everything; and (c) an increasing recognition of the importance of his dialectical method. György Lukács' "History and Class Consciousness" (1923) helped to reintroduce Hegel into the Marxist canon. This sparked a renewed interest in Hegel reflected in the work of Herbert Marcuse, Theodor W. Adorno, Ernst Bloch, Raya Dunayevskaya, Alexandre Kojève and Gotthard Günther, among others. In "Reason and Revolution" (1941), Herbert Marcuse made the case for Hegel as a revolutionary and criticized Leonard Trelawny Hobhouse's thesis that Hegel was a totalitarian. The Hegel renaissance also highlighted the significance of Hegel's early works, i.e. those written before "The Phenomenology of Spirit". The direct and indirect influence of Kojève's lectures and writings (on "The Phenomenology of Spirit" in particular) means that it is not possible to understand most French philosophers from Jean-Paul Sartre to Jacques Derrida without understanding Hegel. American neoconservative political theorist Francis Fukuyama's controversial book "The End of History and the Last Man" (1992) was heavily influenced by Kojève. The Swiss theologian Hans Küng has also advanced contemporary scholarship in Hegel studies. Beginning in the 1960s, Anglo-American Hegel scholarship has attempted to challenge the traditional interpretation of Hegel as offering a metaphysical system: this has also been the approach of Z. A. Pelczynski and Shlomo Avineri. This view, sometimes referred to as the "non-metaphysical option", has had a decided influence on many major English-language studies of Hegel in the past forty years. Late 20th-century literature in Western theology that is friendly to Hegel includes works by such writers as Walter Kaufmann (1966), Dale M. Schlitt (1984), Theodore Geraets (1985), Philip M. Merklinger (1991), Stephen Rocker (1995) and Cyril O'Regan (1995). Two prominent American philosophers, John McDowell and Robert Brandom (sometimes referred to as the "Pittsburgh Hegelians"), have produced philosophical works exhibiting a marked Hegelian influence. Each is avowedly influenced by the late Wilfrid Sellars, also of Pittsburgh, who referred to his seminal work "Empiricism and the Philosophy of Mind" (1956) as a series of "incipient "Méditations Hegeliennes"" (in homage to Edmund Husserl's 1931 work, "Méditations cartésiennes"). In a separate Canadian context, James Doull's philosophy is deeply Hegelian. Beginning in the 1990s after the fall of the Soviet Union, a fresh reading of Hegel took place in the West. For these scholars, fairly well represented by the Hegel Society of America and in cooperation with German scholars such as Otto Pöggeler and Walter Jaeschke, Hegel's works should be read without preconceptions. Marx plays little-to-no role in these new readings. American philosophers associated with this movement include Lawrence Stepelevich, Rudolf Siebert, Richard Dien Winfield and Theodore Geraets. Criticism of Hegel has been widespread in the 19th and the 20th centuries. A diverse range of individuals including Arthur Schopenhauer, Karl Marx, Søren Kierkegaard, Friedrich Nietzsche, Bertrand Russell, G. E. Moore, Franz Rosenzweig, Eric Voegelin and A. J. Ayer have challenged Hegelian philosophy from a variety of perspectives. 
Among the first to take a critical view of Hegel's system was the 19th-century German group known as the Young Hegelians, which included Feuerbach, Marx, Engels and their followers. In Britain, the Hegelian British idealism school (members of which included Francis Herbert Bradley, Bernard Bosanquet and, in the United States, Josiah Royce) was challenged and rejected by analytic philosophers Moore and Russell. In particular, Russell considered "almost all" of Hegel's doctrines to be false. Regarding Hegel's interpretation of history, Russell commented: "Like other historical theories, it required, if it was to be made plausible, some distortion of facts and considerable ignorance". Logical positivists such as Ayer and the Vienna Circle criticized both Hegelian philosophy and its supporters, such as Bradley. Hegel's contemporary Schopenhauer was particularly critical and wrote of Hegel's philosophy as "a pseudo-philosophy paralyzing all mental powers, stifling all real thinking". Hegel was described by Schopenhauer as a "clumsy charlatan". Kierkegaard criticized Hegel's "absolute knowledge" unity. The physicist and philosopher Ludwig Boltzmann also criticized the obscure complexity of Hegel's works, referring to Hegel's writing as an "unclear thoughtless flow of words". In a similar vein, Robert Pippin notes that some view Hegel as having "the ugliest prose style in the history of the German language". Russell wrote in "A History of Western Philosophy" (1945) that Hegel was "the hardest to understand of all the great philosophers". Karl Popper quoted Schopenhauer as stating, "Should you ever intend to dull the wits of a young man and to incapacitate his brains for any kind of thought whatever, then you cannot do better than give Hegel to read...A guardian fearing that his ward might become too intelligent for his schemes might prevent this misfortune by innocently suggesting the reading of Hegel." Karl Popper wrote that "there is so much philosophical writing (especially in the Hegelian school) which may justly be criticised as meaningless verbiage". Popper also made the claim in the second volume of "The Open Society and Its Enemies" (1945) that Hegel's system formed a thinly veiled justification for the absolute rule of Frederick William III and that Hegel's idea of the ultimate goal of history was to reach a state approximating that of 1830s Prussia. Popper further proposed that Hegel's philosophy served as an inspiration for communist and fascist totalitarian governments of the 20th century, whose dialectics allow any belief to be construed as rational simply if it could be said to exist. Kaufmann and Shlomo Avineri have criticized Popper's theories about Hegel. Isaiah Berlin listed Hegel as one of the six architects of modern authoritarianism who undermined liberal democracy, along with Rousseau, Claude Adrien Helvétius, Fichte, Saint-Simon and Joseph de Maistre. Voegelin argued that Hegel should be understood not as a philosopher, but as a "sorcerer", i.e. as a mystic and hermetic thinker. This concept of Hegel as a hermetic thinker was elaborated by Glenn Alexander Magee, who argued that interpreting Hegel's body of work as an expression of mysticism and hermetic ideas leads to a more accurate understanding of Hegel.
https://en.wikipedia.org/wiki?curid=12598
Grid network A grid network is a computer network consisting of a number of computer systems connected in a grid topology. In a regular grid topology, each node in the network is connected with two neighbors along one or more dimensions. If the network is one-dimensional, and the chain of nodes is connected to form a circular loop, the resulting topology is known as a ring. Network systems such as FDDI use two counter-rotating token-passing rings to achieve high reliability and performance. In general, when an "n"-dimensional grid network is connected circularly in more than one dimension, the resulting network topology is a torus, and the network is called "toroidal". When the number of nodes along each dimension of a toroidal network is 2, the resulting network is called a hypercube. A parallel computing cluster or multi-core processor is often connected in a regular interconnection network such as a de Bruijn graph, a hypercube graph, a hypertree network, a fat tree network, a torus, or cube-connected cycles. A grid network is not the same as a grid computer or a computational grid, although the nodes in a grid network are usually computers, and grid computing requires some kind of computer network to interconnect the computers.
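The neighbour relationships described above can be made concrete with a small sketch. The following Python function is an illustration written for this article, not part of any networking library; its name and interface are invented here. It returns the neighbours of a node addressed by its coordinates in an "n"-dimensional grid of a given shape. Enabling wraparound joins the two ends of every dimension, so a one-dimensional wrapped grid behaves as a ring and a multi-dimensional wrapped grid as a torus; with only two nodes per dimension the resulting adjacency is that of a hypercube.

def neighbours(coord, dims, wrap=False):
    # coord: tuple giving the node's position along each dimension
    # dims:  tuple giving the number of nodes along each dimension
    # wrap:  if True, connect the two ends of every dimension (ring / torus)
    found = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            c = list(coord)
            c[axis] += step
            if wrap:
                c[axis] %= size               # wrap around the edge of this dimension
            elif not 0 <= c[axis] < size:
                continue                      # plain grid: no neighbour beyond the edge
            found.append(tuple(c))
    # with only 2 nodes in a dimension, wrapping makes the +1 and -1 steps coincide
    return list(dict.fromkeys(found))

# A one-dimensional wrapped grid of five nodes is a ring: node 0 neighbours nodes 4 and 1.
print(neighbours((0,), (5,), wrap=True))            # [(4,), (1,)]
# A wrapped grid with 2 nodes per dimension is a hypercube: 3 neighbours per node in 3-D.
print(neighbours((0, 0, 0), (2, 2, 2), wrap=True))  # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

The deduplication in the final line reflects the observation above that a torus with two nodes per dimension collapses onto a hypercube, since stepping forwards and backwards along such a dimension reaches the same node.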
https://en.wikipedia.org/wiki?curid=12600
Governor-General of Australia The Governor-General of the Commonwealth of Australia is the representative of the Australian monarch, currently Queen Elizabeth II. As the Queen is concurrently the monarch of 15 other Commonwealth realms, and resides in the United Kingdom, she, on the advice of her Australian prime minister, appoints a governor-general to carry out constitutional duties within the Commonwealth of Australia. The governor-general has formal presidency over the Federal Executive Council and is commander-in-chief of the Australian Defence Force. The functions of the governor-general include appointing ministers, judges, and ambassadors; giving royal assent to legislation passed by parliament; issuing writs for election; and bestowing Australian honours. In general, the governor-general observes the conventions of the Westminster system and responsible government, maintaining a political neutrality, and has almost always acted only on the advice of the prime minister or other ministers or, in certain cases, parliament. The governor-general also has a ceremonial role: hosting events at either of the two official residences (Government House in the capital, Canberra, and Admiralty House in Sydney) and travelling throughout Australia to open conferences, attend services and commemorations, and generally provide encouragement to individuals and groups who are contributing to their communities. When travelling abroad, the governor-general is seen as the representative of Australia, and of the Queen of Australia. The governor-general is supported by a staff (of 80 in 2018) headed by the Official Secretary to the Governor-General of Australia. A governor-general is not appointed for a specific term, but is generally expected to serve for five years subject to a possible short extension. Since 1 July 2019, the governor-general has been General David Hurley. From Federation in 1901 until 1965, 11 out of the 15 governors-general were British aristocrats; they included four barons, three viscounts, three earls, and one royal duke. Since then, all but one of the governors-general have been Australian-born; the exception, Sir Ninian Stephen, arrived in Australia as a teenager. Only one governor-general, Dame Quentin Bryce (2008–2014), has been a woman. The selection of a Governor-General is a responsibility of the Prime Minister of Australia, who may consult privately with staff or colleagues, or with the monarch. The candidate is approached privately to confirm whether they are willing to accept the appointment. Once the candidate has agreed to the appointment, the monarch permits it to be publicly announced in advance, usually several months before the end of the current Governor-General's term. During these months, the person is referred to as the "Governor-General-designate". The actual appointment is made by the monarch. After receiving his or her commission, the Governor-General takes an Oath of Allegiance to the Australian monarch and an Oath of Office, undertaking to serve Australia's monarch "according to law, in the office of Governor-General of the Commonwealth of Australia", and issues a proclamation assuming office. The oaths are usually taken in a ceremony on the floor of the Senate and are administered by the Chief Justice of Australia in the presence of the Prime Minister of Australia, the Speaker of the Australian House of Representatives, and the President of the Australian Senate. 
In 1919, Prime Minister Billy Hughes sent a memorandum to the Colonial Office in which he requested "a real and effective voice in the selection of the King's representative". He further proposed that the Dominions be able to nominate their own candidates and that "the field of selection should not exclude citizens of the Dominion itself". The memorandum met with strong opposition within the Colonial Office and was dismissed by Lord Milner, the Colonial Secretary; no response was given. The following year, as Ronald Munro Ferguson's term was about to expire, Hughes cabled the Colonial Office and asked that the appointment be made in accordance with the memorandum. To mollify Hughes, Milner offered him a choice between three candidates. After consulting his cabinet he chose Henry Forster, 1st Baron Forster. In 1925, under Prime Minister Stanley Bruce, the same practice was followed for the appointment of Forster's successor Lord Stonehaven, with the Australian government publicly stating that his name "had been submitted, with others, to the Commonwealth ministry, who had selected him". The Prime Minister now advises the monarch to appoint their nominee. This has been the procedure since November 1930, when James Scullin's proposed appointment of Sir Isaac Isaacs was fiercely opposed by the British government. This was not because of any lack of regard for Isaacs personally, but because the British government considered that the choice of Governors-General was, since the 1926 Imperial Conference, a matter for the monarch's decision alone. (However, it became very clear in a conversation between Scullin and King George V's Private Secretary, Lord Stamfordham, on 11 November 1930, that this was merely the official reason for the objection, with the real reason being that an Australian, no matter how highly regarded personally, was not considered appropriate to be a Governor-General.) Scullin was equally insistent that the monarch must act on the relevant prime minister's direct advice (the practice until 1926 was that Dominion prime ministers advised the monarch indirectly, through the British government, which effectively had a veto over any proposal it did not agree with). Scullin cited the precedents of the Prime Minister of South Africa, J. B. M. Hertzog, who had recently insisted on his choice of the Earl of Clarendon as Governor-General of that country, and the selection of an Irishman as Governor-General of the Irish Free State. Both of these appointments had been agreed to despite British government objections. Despite these precedents, George V remained reluctant to accept Scullin's recommendation of Isaacs and asked him to consider Field Marshal Sir William Birdwood. However, Scullin stood firm, saying he would be prepared to fight a general election on the issue of whether an Australian should be prevented from becoming governor-general because he was Australian. On 29 November, the King agreed to Isaacs's appointment, but made it clear that he did so only because he felt he had no option. (Lord Stamfordham had complained that Scullin had "put a gun to the King's head".) This right not only to advise the monarch directly, but also to expect that advice to be accepted, was soon taken up by all the other Dominion prime ministers. This, among other things, led to the Statute of Westminster 1931 and to the formal separation of the Crowns of the Dominions. 
Now, the Queen of Australia is generally bound by constitutional convention to accept the advice of the Australian prime minister and state premiers about Australian and state constitutional matters, respectively. Governors-General have during their tenure the style "His/Her Excellency the Honourable" and their spouses have the style "His/Her Excellency". Since May 2013, the style used by a former Governor-General is "the Honourable"; it was at the same time retrospectively granted for life to all previous holders of the office. From the creation of the Order of Australia in 1975, the Governor-General was, "ex officio", Chancellor and Principal Companion of the Order, and therefore became entitled to the post-nominal AC. In 1976, the letters patent for the Order were amended to introduce the rank of Knight and Dame to the Order, and from that time the Governor-General became, ex officio, the Chancellor and Principal Knight of the Order. In 1986 the letters patent were amended again, and Governors-General appointed from that time were again, ex officio, entitled to the post-nominal AC (although if they already held a knighthood in the Order that superior rank was retained). Until 1989, all Governors-General were members of the Privy Council of the United Kingdom and thus held the additional style "the Right Honourable" for life. The same individuals were also usually either peers, knights, or both (the only Australian peer to be appointed as Governor-General was the Lord Casey; and Sir William McKell was knighted only in 1951, some years into his term, but he was entitled to the style "The Honourable" during his tenure as Premier of New South Wales, an office he held until almost immediately before his appointment). In 1989, Bill Hayden, a republican, declined appointment to the British Privy Council and any imperial honours. From that time until 2014, Governors-General did not receive automatic titles or honours, other than the post-nominal AC by virtue of being Chancellor and Principal Companion of the Order of Australia. Dame Quentin Bryce was the first Governor-General to have had no prior title or pre-nominal style. She was in office when, on 19 March 2014, the Queen, acting on the advice of Prime Minister Tony Abbott, amended the letters patent of the Order of Australia to provide, inter alia, that the Governor-General would be, ex officio, Principal Knight or Principal Dame of the Order. Until 2015, the honour continued after the retirement from office of the Governor-General. Formerly, the Governor-General automatically became a knight or dame (if he or she was not already one previously) upon being sworn in. All the Governors-General until 1965 were British-born, except for Australian-born Sir Isaac Isaacs (1931–1936) and Sir William McKell (1947–1953). There have been only Australian occupants since then, although Sir Ninian Stephen (1982–1989) had been born in Britain. Prince Henry, Duke of Gloucester, was a senior member of the Royal family. Dame Quentin Bryce (2008–2014) was the first woman to be appointed to the office. Sir Isaac Isaacs and Sir Zelman Cowen were Jewish; Bill Hayden was an avowed atheist during his term and he made an affirmation rather than swear an oath at the beginning of his commission; the remaining Governors-General have been at least nominally Christian. 
Various Governors-General had previously served as governors of an Australian state or colony: Lord Hopetoun (Victoria 1889–1895); Lord Tennyson (South Australia 1899–1902); Lord Gowrie (South Australia 1928–1934 and New South Wales 1935–1936); Major General Michael Jeffery (Western Australia 1993–2000); Dame Quentin Bryce (Queensland 2003–2008); General David Hurley (New South Wales 2014–2019). Sir Ronald Munro Ferguson had been offered the governorship of South Australia in 1895 and of Victoria in 1910, but refused both appointments. Lord Northcote was Governor of Bombay. Lord Casey was Governor of Bengal in between his periods of service to the Australian Parliament. Former leading politicians and members of the judiciary have figured prominently. Lord Dudley was Lord Lieutenant of Ireland (1902–1905). Lord Stonehaven (as John Baird) was Minister for Transport in the Cabinets of Bonar Law and Stanley Baldwin; and after his return to Britain he became Chairman of the UK Conservative Party. Sir Isaac Isaacs was successively Commonwealth Attorney-General, a High Court judge, and Chief Justice. Sir William McKell was Premier of New South Wales. Lord Dunrossil (as William Morrison) was Speaker of the UK House of Commons. Lord De L'Isle was Secretary of State for Air in Winston Churchill's cabinet from 1951 to 1955. More recent Governors-General in this category include Lord Casey, Sir Paul Hasluck, Sir John Kerr, Sir Ninian Stephen, Bill Hayden and Sir William Deane. Of the eleven Australians appointed governor-general since 1965, Lord Casey, Sir Paul Hasluck and Bill Hayden were former federal parliamentarians; Sir John Kerr was the Chief Justice of the Supreme Court of New South Wales; Sir Ninian Stephen and Sir William Deane were appointed from the bench of the High Court; Sir Zelman Cowen was a vice-chancellor of the University of Queensland and constitutional lawyer; Peter Hollingworth was the Anglican Archbishop of Brisbane; and Major-General Michael Jeffery was a retired military officer and former Governor of Western Australia. Quentin Bryce's appointment was announced during her term as Governor of Queensland; she had previously been the Federal Sex Discrimination Commissioner. General David Hurley was a retired Chief of Defence Force and former Governor of New South Wales. Significant post-retirement activities of earlier Governors-General have included: Lord Tennyson was appointed Deputy Governor of the Isle of Wight; Sir Ronald Munro Ferguson (by now Lord Novar) became Secretary of State for Scotland; and Lord Gowrie became Chairman of the Marylebone Cricket Club (Lord Forster had also held this post, before his appointment as Governor-General). The constitution does not set a term of office, so a Governor-General may continue to hold office for any agreed length of time. In recent decades the typical term of office has been five years. Some early governors-general were appointed to terms of just one year (Lord Tennyson) or two years (Lord Forster; later extended). At the end of this initial term, a commission may be extended for a short time, usually to avoid conflict with an election or during political difficulties. The salary of the Governor-General was initially set by the constitution, which fixed an annual amount of A£10,000 until the parliament decided otherwise. The constitution also provides that the salary of the Governor-General cannot be "altered" during his or her term of office. 
Under the "Governor-General Act 1974", each new commission has resulted in a pay increase. Today, the law ensures the salary is higher than that for the Chief Justice of the High Court, over a five-year period. The annual salary during Michael Jeffery's term was $365,000. Quentin Bryce's salary was $394,000. The current salary is $425,000 and there is a generous pension. Until 2001, Governors-General did not pay income tax on their salary; this was changed after the Queen agreed to pay tax. Three Governors-General have resigned their commission. The first Governor-General, Lord Hopetoun, asked to be recalled to Britain in 1903 over a dispute about funding for the post. Sir John Kerr resigned in 1977, with his official reason being his decision to accept the position of Australian Ambassador to UNESCO in Paris, a post which ultimately he did not take up, but the resignation also being motivated by the 1975 constitutional controversy. In 2003, Archbishop Peter Hollingworth voluntarily stood aside while controversial allegations against him were managed, and the letters patent of the office were amended to take account of this circumstance. He later "stepped down over the church's handling" of "allegations" of sexual abuse of boys, for which he apologised before the Royal Commission into Institutional Responses to Child Sexual Abuse in 2016. In 1961, Lord Dunrossil became the first and, to date, only Governor-General to die while holding office. A Governor-General may be recalled or dismissed by the monarch before their term is complete. By convention, this may only be upon advice from the prime minister, who retains responsibility for selecting an immediate replacement or letting the vacancy provisions take effect. No Australian Governor-General has ever been dismissed, and it is unclear how quickly the monarch would act on such advice. The constitutional crisis of 1975 raised the possibility of the prime minister and the Governor-General attempting to dismiss each other at the same time. According to William McMahon, Harold Holt considered having Lord Casey dismissed from the governor-generalship, and went as far as to have the necessary documents drawn up. Casey had twice called McMahon into Yarralumla to give him a "dressing down" over his poor relationship with Deputy Prime Minister John McEwen, which he believed was affecting the government. Holt believed that this was an improper use of his authority, but no further action was taken. A vacancy occurs on the resignation, death, or incapacity of the Governor-General. A temporary vacancy occurs when the Governor-General is overseas on official business representing Australia. A temporary vacancy also occurred in 2003 when Peter Hollingworth stood aside. Section 4 of the constitution allows the Queen to appoint an administrator to carry out the role of Governor-General when there is a vacancy. By convention, the longest-serving state governor holds a dormant commission, allowing an assumption of office to commence whenever a vacancy occurs. In 1975, Labor Prime Minister Gough Whitlam advised the Queen that Sir Colin Hannah, then Governor of Queensland, should have his dormant commission revoked for having made public political statements. 
The Constitution of Australia, section 2, provides: "A Governor-General appointed by the Queen shall be Her Majesty's representative in the Commonwealth, and shall have and may exercise in the Commonwealth during the Queen's pleasure, but subject to this Constitution, such powers and functions of the Queen as Her Majesty may be pleased to assign to him." Such further powers are currently set out in letters patent of 2008 from Queen Elizabeth II; these contain no substantive powers, but provide for the case of a Governor-General's absence or incapacity. The constitution also provides that the Governor-General is the monarch's "representative" in exercising the executive power of the Commonwealth (section 61) and as commander-in-chief of the armed forces (section 68). Australian Solicitor-General Maurice Byers stated in 1974: "The constitutional prescription is that executive power is exercisable by the Governor-General although vested in the Queen. What is exercisable is original executive power: that is, the very thing vested in the Queen by section 61. And it is exercisable by the Queen's representative, not her delegate or agent." The 1988 Constitutional Commission report explained: "the Governor-General is in no sense a delegate of the Queen. The independence of the office is highlighted by changes which have been made in recent years to the Royal Instruments relating to it." The changes occurred in 1984 when Queen Victoria's letters patent and instructions were revoked and replaced with new letters patent on the advice of Prime Minister Bob Hawke, who stated that this would clarify the Governor-General's position under the constitution. This remains the case even when the sovereign is in the country: Solicitor-General Kenneth Bailey, prior to the first tour of Australia by its reigning monarch in 1954, explained the position by saying: "the Constitution expressly vests in the Governor-General the power or duty to perform a number of the Crown's functions in the Legislature and the Executive Government of the Commonwealth... The executive power of the Commonwealth, by section 61 of the Constitution, is declared to be vested in the Queen. It is also, in the same section, declared to be "exercisable" by the Governor-General as the Queen's representative. In the face of this provision, I feel it is difficult to contend that the Queen, even though present in Australia, may exercise in person functions of executive government which are specifically assigned by the constitution to the Governor-General." As early as 1901, the authoritative commentary by Quick and Garran had noted that the Governor-General of Australia was distinguished from other Empire Governors-General by the fact that "[t]he principal and most important of his powers and functions, legislative as well as executive, are expressly conferred on him by the terms of the Constitution itself ... not by Royal authority, but by statutory authority". This view was also held by Senior Judge of the Supreme Court of Tasmania Andrew Inglis Clark, who, with W. Harrison Moore (a contributor to the first draft of the constitution put before the 1897 Adelaide Convention and professor of law at the University of Melbourne), postulated that the letters patent and the royal instructions issued by Queen Victoria were unnecessary "or even of doubtful legality". 
The monarch chose not to intervene during the 1975 Australian constitutional crisis, in which Governor-General Sir John Kerr dismissed the Labor government of Gough Whitlam, on the basis that such a decision is a matter "clearly placed within the jurisdiction of the Governor-General". Through her Private Secretary, she wrote that she "has no part in the decisions which the Governor-General must take in accordance with the Constitution". In an address to the Sydney Institute, January 2007, in connection with that event, Sir David Smith, a retired Official Secretary to the Governor-General of Australia, described the constitution as conferring the powers and functions of Australia's head of state on the Governor-General in "his own right". He stated that the Governor-General was more than a representative of the sovereign, explaining: "under section 2 of the Constitution the Governor-General is the Queen's representative and exercises certain royal prerogative powers and functions; under section 61 of the Constitution the Governor-General is the holder of a quite separate and independent office created, not by the Crown, but by the Constitution, and empowered to exercise, in his own right as Governor-General... all the powers and functions of Australia's head of state." The constitution describes the parliament of the commonwealth as consisting of the Queen, the Senate and the House of Representatives. Section 5 states that "the Governor-General may appoint such times for holding the sessions of the Parliament [...] prorogue the Parliament [and] dissolve the House of Representatives." These provisions make it clear that the Queen's role in the parliament is in name only and the actual responsibility belongs to the Governor-General. Such decisions are usually taken on the advice of the prime minister, although that is not stated in the constitution. The Governor-General has a ceremonial role in swearing in and accepting the resignations of Members of Parliament. They appoint a deputy, to whom members make an oath of allegiance before they take their seats. On the day parliament opens, the Governor-General makes a speech, entirely written by the government, explaining the government's proposed legislative program. The most important power is found in section 58: "When a proposed law passed by both Houses of Parliament is presented to the Governor-General for the Queen's assent, he shall declare ... that he assents in the Queen's name." The royal assent brings such laws into effect, as legislation, from the date of signing. Sections 58 to 60 allow the Governor-General to withhold assent, suggest changes, refer to the Queen or proclaim that the Queen has annulled the legislation. A number of Governors-General have reserved Royal Assent for particular legislation for the Queen. Such assent has usually been given during a scheduled visit to Australia by the Queen. On other occasions Royal Assent has been given elsewhere. Examples of this have been the Flags Act (1953), the Royal Styles and Titles Acts (1953 and 1973), and the Australia Act (1986). At the start of Chapter 2 on executive government, the constitution says "The executive power of the Commonwealth is vested in the Queen and is exercisable by the Governor-General as the Queen's representative". The Governor-General presides over a Federal Executive Council. By convention, the prime minister is appointed to this council and advises as to which parliamentarians shall become ministers and parliamentary secretaries. 
In the constitution, the words "Governor-General-in-council" mean the Governor-General acting with the advice of the Council. Powers exercised in council, as distinct from the reserve powers, are all exercised on the advice of ministers. Under section 68 of the constitution, "the command in chief of the naval and military forces of the Commonwealth is vested in the Governor-General". In practice, the associated powers over the Australian Defence Force are only exercised on the advice of the Prime Minister or Minister for Defence, on behalf of cabinet. The actual powers of the Governor-General as commander-in-chief are not defined in the constitution, but rather in the "Defence Act 1903" and other legislation. They include appointing the Chief of the Defence Force and authorising the deployment of troops. There is some ambiguity with regard to the role of the Governor-General in declarations of war. In 1941 and 1942, the Curtin Government advised the Governor-General to declare war on several Axis powers, but then had King George VI make identical proclamations on Australia's behalf. No formal declarations of war have been made since World War II, although in 1973 the Whitlam Government advised the Governor-General to proclaim the end of Australia's involvement in Vietnam, despite the lack of an initiating proclamation. The powers of command-in-chief are vested in the Governor-General rather than the "Governor-General in Council", meaning there is an element of personal discretion in their exercise. For instance, in 1970 Governor-General Paul Hasluck refused Prime Minister John Gorton's request to authorise a Pacific Islands Regiment peacekeeping mission in the Territory of Papua and New Guinea, on the grounds that cabinet had not been consulted. Gorton agreed to put the matter to his ministers, and a cabinet meeting agreed that troops should only be called out if requested by the territory's administrator; this did not occur. Defence Minister Malcolm Fraser, who opposed the call out, was responsible for informing Hasluck of the prime minister's lack of consultation. The incident contributed to Fraser's resignation from cabinet in 1971 and Gorton's subsequent loss of the prime ministership. In the United Kingdom, the reserve powers of the monarch (typically referred to as the "royal prerogative") are not explicitly stated in constitutional enactments, and are the province of convention and common law. In Australia, however, the powers are explicitly given to the Governor-General in the constitution; it is their use that is the subject of convention. The reserve powers set out in the Constitution of Australia are generally and routinely exercised on ministerial advice, but the Governor-General retains the ability to act independently in certain circumstances, as governed by convention. It is generally held that the Governor-General may exercise these powers without ministerial advice only in limited circumstances; there is no exhaustive list of such circumstances, and new situations may arise. The most notable use of the reserve powers occurred in November 1975, in the course of the 1975 Australian constitutional crisis. On this occasion the Governor-General, Sir John Kerr, dismissed the government of Gough Whitlam when the Senate withheld Supply from the government, even though Whitlam retained the confidence of the House of Representatives. 
Kerr determined that he had both the right and the duty to dismiss the government and commission a new government that would recommend a dissolution of the Parliament. Events surrounding the dismissal remain extremely controversial. On 18 March 2020, a human biosecurity emergency was declared in Australia owing to the risks to human health posed by the COVID-19 pandemic in Australia, after the National Security Committee met the previous day. The "Biosecurity Act 2015" specifies that the Governor-General may declare such an emergency exists if the Health Minister (currently Greg Hunt) is satisfied that "a listed human disease is posing a severe and immediate threat, or is causing harm, to human health on a nationally significant scale". This gives the Minister sweeping powers, including imposing restrictions or preventing the movement of people and goods between specified places, and evacuations. The "Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) Declaration 2020" was declared by the Governor-General, currently David Hurley, under Section 475 of the Act. In addition to the formal constitutional role, the Governor-General has a representative and ceremonial role, though the extent and nature of that role has depended on the expectations of the time, the individual in office at the time, the wishes of the incumbent government, and the individual's reputation in the wider community. Governors-General generally become patrons of various charitable institutions, present honours and awards, host functions for various groups of people including ambassadors to and from other countries, and travel widely throughout Australia. Sir William Deane (Governor-General 1996–2001) described one of his functions as being "Chief Mourner" at prominent funerals. In "Commentaries on the Constitution of the Commonwealth of Australia", Garran noted that, since the Australian executive is national in nature (being dependent on the nationally elected House of Representatives, rather than the Senate), "the Governor-General, as the official head of the Executive, does not in the smallest degree represent any federal element; if he represents anything he is the image and embodiment of national unity and the outward and visible representation of the Imperial relationship of the Commonwealth". That role can become controversial, however, if the Governor-General becomes unpopular with sections of the community. The public role adopted by Sir John Kerr was curtailed considerably after the constitutional crisis of 1975; Sir William Deane's public statements on political issues produced some hostility towards him; and some charities disassociated themselves from Peter Hollingworth after the issue of his management of sex abuse cases during his time as Anglican Archbishop of Brisbane became a matter of controversy. At one time, Governors-General wore the traditional court uniform, consisting of a dark navy wool double-breasted coatee with silver oak leaf and fern embroidery on the collar and cuffs trimmed with silver buttons embossed with the Royal Arms and with bullion edged epaulettes on the shoulders, dark navy trousers with a wide band of silver oak-leaf braid down the outside seam, silver sword belt with ceremonial sword, bicorne cocked hat with plume of ostrich feathers, black patent leather Wellington boots with spurs, etc., that is worn on ceremonial occasions. 
There is also a tropical version made of white tropical wool cut in a typical military fashion worn with a plumed helmet. However, that custom fell into disuse during the tenure of Sir Paul Hasluck. The Governor-General now wears an ordinary lounge suit if a man or day dress if a woman. The governor-general makes state visits on behalf of Australia, during which time an administrator of the government is appointed. The right of governors-general to make state visits was confirmed at the 1926 Imperial Conference, as it was deemed unfeasible for the sovereign to pay state visits on behalf of countries other than the United Kingdom. However, an Australian governor-general did not exercise that right until 1971, when Paul Hasluck visited New Zealand. Hasluck's successor John Kerr made state visits to eight countries, but Kerr's successor Zelman Cowen made only a single state visit – to Papua New Guinea – as he wished to concentrate on travelling within Australia. All subsequent governors-general have travelled widely while in office and made multiple state visits. Occasionally governors-general have made extended tours visiting multiple countries, notably in 2009 when Quentin Bryce visited nine African countries in 19 days. The office of Governor-General as an agency of the Commonwealth is regulated by the Governor-General Act 1974. The act provides the Governor-General with a salary (fixed in 2014 at $425,000) and, after leaving office, a lifetime allowance fixed at two-thirds of the salary of the Chief Justice of the High Court. There is also provision for a surviving spouse or partner. The Governor-General appoints an Official Secretary, who in turn appoints other staff. By convention, the Governor-General and any family occupy an official residence in Canberra, Government House (commonly referred to as Yarralumla). The Governor-General travels in a Rolls-Royce Phantom VI limousine for ceremonial occasions, such as the State Opening of Parliament. However, Governors-General more commonly use Australian-built luxury cars when on official business. The official cars of the Governor-General fly the Flag of the Governor-General of Australia and display St. Edward's Crown instead of number plates. A similar arrangement is used for the governors of the six states. When the Queen is in Australia, the Queen's Personal Australian Flag is flown on the car in which she is travelling. During the Queen's 2011 visit to Australia, she and the Duke of Edinburgh were driven in a Range Rover Vogue. The office of "Governor-General" was previously used in Australia in the mid-19th century. Sir Charles FitzRoy (Governor of New South Wales from 1846–1855) and Sir William Denison (Governor of New South Wales from 1855–1861) also carried the additional title of Governor-General because their jurisdiction extended to other colonies in Australia. The office of Governor-General for the Commonwealth of Australia was conceived during the debates and conventions leading up to federation. The first Governor-General, the Earl of Hopetoun, was a previous Governor of Victoria. He was appointed in July 1900, returning to Australia shortly before the inauguration of the Commonwealth of Australia on 1 January 1901. After the initial confusion of the Hopetoun Blunder, he appointed the first Prime Minister of Australia, Edmund Barton, to a caretaker government, with the inaugural 1901 federal election not occurring until March. 
Early Governors-General were British and were appointed by the Queen or King on the recommendation of the Colonial Office. The Australian Government was merely asked, as a matter of courtesy, whether it approved of the choice. Governors-General were expected to exercise a supervisory role over the Australian Government in the manner of a colonial governor. In a very real sense, they represented the British Government. They had the right to "reserve" legislation passed by the Parliament of Australia: in effect, to ask the Colonial Office in London for an opinion before giving the royal assent. They exercised this power several times. The Queen (that is, the UK monarch acting upon the advice of the British Government) could also disallow any Australian legislation up to a year after the Governor-General had given it assent, although this power has never been used. These powers remain in the Constitution, but today are regarded as dead letters. The early Governors-General frequently sought advice on the exercise of their powers from two judges of the High Court of Australia, Sir Samuel Griffith and Sir Edmund Barton. That practice has continued from time to time. During the 1920s, the importance of the position declined. As a result of decisions made at the 1926 Imperial Conference, the Governor-General ceased to represent the British Government diplomatically, and the British right of supervision over Australian affairs was abolished. As the Balfour Declaration of 1926, later implemented as the Statute of Westminster 1931, put it: "It is desirable formally to place on record a definition of the position held by the Governor-General as His Majesty's representative in the Dominions. That position, though now generally well recognised, undoubtedly represents a development from an earlier stage when the Governor-General was appointed solely on the advice of His Majesty's Ministers in London and acted also as their representative. In our opinion it is an essential consequence of the equality of status existing among the members of the British Commonwealth of Nations that the Governor-General of a Dominion is the representative of the Crown, holding in all essential respects the same position in relation to the administration of public affairs in the Dominion as is held by His Majesty the King in Great Britain, and that he is not the representative or agent of His Majesty's Government in Great Britain or of any Department of that Government." However, it remained unclear just whose prerogative it now became to decide who new Governors-General would be. In 1930, King George V and the Australian Prime Minister James Scullin discussed the appointment of a new Governor-General to replace Lord Stonehaven, whose term was coming to an end. The King maintained that it was now his sole prerogative to choose a Governor-General, and he wanted Field-Marshal Sir William Birdwood for the Australian post. Scullin recommended the Australian jurist Sir Isaac Isaacs, and insisted that George V act on the advice of his Australian prime minister in this matter. Scullin was partially influenced by the precedent set by the Government of the Irish Free State, which had always insisted upon having an Irishman as its Governor-General. The King approved Scullin's choice, albeit with some displeasure. 
The usual wording of official announcements of this nature read "The King has been pleased to appoint ...", but on this occasion the announcement said merely "The King has appointed ...", and his Private Secretary (Lord Stamfordham) asked the Australian Solicitor-General, Sir Robert Garran, to make sure that Scullin was aware of the exact wording. The opposition Nationalist Party of Australia denounced the appointment as "practically republican", but Scullin had set a precedent. The convention gradually became established throughout the Commonwealth that the Governor-General is a citizen of the country concerned, and is appointed on the advice of the government of that country. In 1931, the transformation was concluded with the appointment of the first Australian Governor-General, Isaacs, and the first British Representative in Australia, Ernest Crutchley. 1935 saw the appointment of the first British High Commissioner to Australia, Geoffrey Whiskard (in office 1936–1941). After Scullin's defeat in 1931, non-Labor governments continued to recommend British people for appointment as Governor-General, but such appointments remained solely a matter between the Australian government and the monarch. In 1947, Labor appointed a second Australian Governor-General, William McKell, who was then in office as the Labor Premier of New South Wales. The then Leader of the Opposition, Robert Menzies, called McKell's appointment "shocking and humiliating". In 1965 the Menzies conservative government appointed an Australian, Lord Casey, and thereafter only Australians have held the position. Suggestions during the early 1980s that the Prince of Wales might become the Governor-General came to nothing, owing to the prospective constitutional difficulty that might ensue if Prince Charles became King. In 2007 media outlets reported that Prince William might become Governor-General of Australia. Both the Prime Minister, John Howard, and Clarence House repudiated the suggestion. The Governor-General is generally invited to become Patron of various charitable and service organisations. Historically the Governor-General has also served as Chief Scout of Australia. The Chief Scout is nominated by the Scouting Association's National Executive Committee and is invited by the President of the Scout Association to accept the appointment. Bill Hayden declined the office on the grounds of his atheism, which was incompatible with the Scout Promise. He did, however, serve as the Association's Patron during his term of office. Spouses of Governors-General have no official duties but carry out the role of a vice-regal consort. They are entitled to the courtesy style "Her Excellency" or "His Excellency" during the office-holder's term of office. Most spouses of Governors-General have been content to be quietly supportive. Some, however, have been notable in their own right, such as Dame Alexandra Hasluck, Lady Casey and Michael Bryce. There are six living former Governors-General of Australia. The most recently deceased Governor-General, Sir Ninian Stephen (1982–1989), died on 29 October 2017.
https://en.wikipedia.org/wiki?curid=12601
Glasnost In the Russian language the word glasnost has several general and specific meanings. It has been used in Russian to mean "openness and transparency" since at least the end of the eighteenth century. In the Russian Empire of the late-19th century, the term was particularly associated with reforms of the judicial system, among them reforms permitting attendance of the press and the public at trials whose verdicts were now to be read aloud. In the mid-1980s, it was popularised by Mikhail Gorbachev as a political slogan for increased government transparency in the Soviet Union. Human rights activist Lyudmila Alexeyeva argues that the word "glasnost" has been in the Russian language for several hundred years as a common term: "It was in the dictionaries and lawbooks as long as there had been dictionaries and lawbooks. It was an ordinary, hardworking, non-descript word that was used to refer to a process, any process of justice or governance, being conducted in the open." In the mid-1960s it acquired a revived topical importance in discourse concerning the Cold War-era internal policy of the Soviet Union. On 5 December 1965 the Glasnost rally took place in Moscow, considered to be a key event in the emergence of the Soviet civil rights movement. Protesters on Pushkin Square, led by Alexander Yesenin-Volpin, demanded access to the closed trial of Yuly Daniel and Andrei Sinyavsky. The protesters made specific requests for "glasnost", referring to the admission of the public, independent observers and foreign journalists to the trial, as legislated in the then newly issued Code of Criminal Procedure. With a few specified exceptions, Article 111 of the Code stated that judicial hearings in the USSR should be held in public. Such protests against closed trials continued throughout the post-Stalin era. Andrei Sakharov, for example, did not travel to Oslo to receive his Nobel Peace Prize because he was publicly protesting outside a Vilnius court building, demanding access to the 1976 trial of Sergei Kovalev, an editor of the "Chronicle of Current Events" and prominent rights activist. In 1986 Mikhail Gorbachev and his advisers adopted "glasnost" as a political slogan, together with the obscure term "perestroika", in order to invoke the term's historical and contemporaneous resonance. Glasnost was taken to mean increased openness and transparency in government institutions and activities in the Soviet Union (USSR). "Glasnost" reflected a commitment of the Gorbachev administration to allowing Soviet citizens to discuss publicly the problems of their system and potential solutions. Gorbachev encouraged popular scrutiny and criticism of leaders, as well as a certain level of exposure by the mass media. Some critics, especially among legal reformers and dissidents, regarded the Soviet authorities' new slogans as vague and limited alternatives to more basic liberties. Alexei Simonov, president of the Glasnost Defence Foundation, offers a critical definition of the term, suggesting it was "a tortoise crawling towards Freedom of Speech". Between 1986 and 1991, during an era of reforms in the USSR, glasnost was frequently linked with other generalised concepts such as perestroika (literally: restructuring or regrouping) and demokratizatsiya (democratisation). 
Gorbachev often appealed to glasnost when promoting policies aimed at reducing corruption at the top of the Communist Party and the Soviet government, and at moderating the abuse of administrative power in the Central Committee. The ambiguity of "glasnost" defines the distinctive five-year period (1986–1991) at the end of the USSR's existence. There was decreasing pre-publication and pre-broadcast censorship and greater freedom of information. The "Era of Glasnost" saw greater contact between Soviet citizens and the Western world, particularly the United States: restrictions on travel were loosened for many Soviet citizens, which further eased pressures on international exchange between the Soviet Union and the West. Gorbachev's interpretation of "glasnost" can best be summarised in English as "openness". While associated with freedom of speech, the main goal of this policy was to make the country's management transparent, and to circumvent the near-complete control over the economy and bureaucracy of the Soviet Union held by a concentrated body of officials and bureaucratic personnel. During Glasnost, Soviet history under Stalin was re-examined; censored literature in the libraries was made more widely available; and there was greater freedom of speech for citizens and openness in the media. It was in the late 1980s that most people in the Soviet Union began to learn about the atrocities of Stalin and about previously suppressed events, such as the first manned Moon landing by the United States, the marches and speeches of the U.S. African-American civil rights movement, and the full story of the decolonisation of Africa. Information about the supposedly higher quality of consumer goods and quality of life in the United States and Western Europe began to be transmitted to the Soviet population, along with Western popular culture. The outright prohibition of censorship was enshrined in Article 29 of the new 1993 Constitution of the Russian Federation. This has, however, been the subject of ongoing controversy in contemporary Russia, owing to heightened governmental interventions restricting access to information for Russian citizens and pressure by government-operated media outlets not to publicise or discuss certain events or subjects in recent years. Monitoring of the infringement of media rights in the years from 2004 to 2013 found that instances of censorship were the most commonly reported type of violation. There have also been periodic concerns about the extent of glasnost in court proceedings, as restrictions have been placed on media and public access to certain cases.
https://en.wikipedia.org/wiki?curid=12607
Geodesy Geodesy is the Earth science of accurately measuring and understanding Earth's geometric shape, orientation in space and gravitational field. The field also incorporates studies of how these properties change over time and equivalent measurements for other planets (known as planetary geodesy). Geodynamical phenomena include crustal motion, tides and polar motion, which can be studied by designing global and national control networks, applying space and terrestrial techniques and relying on datums and coordinate systems. The word geodesy comes from the Ancient Greek word "geodaisia" (literally, "division of Earth"). It is primarily concerned with positioning within the temporally varying gravitational field. Geodesy in the German-speaking world is divided into "higher geodesy" ("Erdmessung" or "höhere Geodäsie"), which is concerned with measuring Earth on the global scale, and "practical geodesy" or "engineering geodesy" ("Ingenieurgeodäsie"), which is concerned with measuring specific parts or regions of Earth, and which includes surveying. Such geodetic operations are also applied to other astronomical bodies in the solar system. To a large extent, the shape of Earth is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates and volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography) and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy. The geoid is essentially the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of sea water, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. The geoid, unlike the reference ellipsoid, is irregular and too complicated to serve as the computational surface on which to solve geometrical problems like point positioning. The geometrical separation between the geoid and the reference ellipsoid is called the geoidal undulation. It varies globally between ±110 m when referred to the GRS 80 ellipsoid. A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) "a" and flattening "f". The quantity "f" = ("a" − "b")/"a", where "b" is the semi-minor axis (polar radius), is a purely geometrical one. The mechanical ellipticity of Earth (dynamical flattening, symbol "J"2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with the geometrical flattening is indirect. The relationship depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass. The 1980 Geodetic Reference System (GRS 80) posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. This system was adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG). It is essentially the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. The numerous systems that countries have used to create maps and charts are becoming obsolete as countries increasingly move to global, geocentric reference systems using the GRS 80 reference ellipsoid. 
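To illustrate how these defining constants relate to one another, here is a minimal Python sketch using the GRS 80 values quoted above (the full GRS 80 reciprocal flattening 298.257222101 is used; the numbers in the comments are rounded):

```python
# GRS 80 defining geometric constants (as quoted above)
a = 6378137.0           # semi-major axis (equatorial radius), metres
inv_f = 298.257222101   # reciprocal flattening 1/f

f = 1.0 / inv_f         # flattening f = (a - b) / a
b = a * (1.0 - f)       # semi-minor axis (polar radius), ~6356752.314 m
e2 = f * (2.0 - f)      # first eccentricity squared, ~0.00669438

print(f"b   = {b:.3f} m")
print(f"e^2 = {e2:.8f}")
```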
The geoid is "realizable", meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a real surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, therefore it is an abstract surface. The third primary surface of geodetic interest—the topographic surface of Earth—is a realizable surface. The locations of points in three-dimensional space are most conveniently described by three cartesian or rectangular coordinates, "X", "Y" and "Z". Since the advent of satellite positioning, such coordinate systems are typically geocentric: the "Z"-axis is aligned with Earth's (conventional or instantaneous) rotation axis. Prior to the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but their origins differed from the geocenter by hundreds of meters, due to regional deviations in the direction of the plumbline (vertical). These regional geodetic data, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927) have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas. It is only because GPS satellites orbit about the geocenter, that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space are themselves computed in such a system. Geocentric coordinate systems used in geodesy can be divided naturally into two classes: The coordinate transformation between these two systems is described to good approximation by (apparent) sidereal time, which takes into account variations in Earth's axial rotation (length-of-day variations). A more accurate description also takes polar motion into account, a phenomenon closely monitored by geodesists. In surveying and mapping, important fields of application of geodesy, two general types of coordinate systems are used in the plane: Rectangular coordinates in the plane can be used intuitively with respect to one's current location, in which case the "x"-axis will point to the local north. More formally, such coordinates can be obtained from three-dimensional coordinates using the artifice of a map projection. It is "not" possible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen—called a conformal projection—preserves angles and length ratios, so that small circles are mapped as small circles and small squares as squares. An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates "x" and "y". In this case, the north direction used for reference is the "map" north, not the "local" north. The difference between the two is called meridian convergence. It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be "α" and "s" respectively, then we have The reverse transformation is given by: In geodesy, point or terrain "heights" are "above sea level", an irregular, physically defined surface. Heights come in the following variants: Each has its advantages and disadvantages. Both orthometric and normal heights are heights in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. 
Orthometric and normal heights differ in the precise way in which mean sea level is conceptually continued under the continental masses. The reference surface for orthometric heights is the geoid, an equipotential surface approximating mean sea level. "None" of these heights is in any way related to geodetic or ellipsoidal heights, which express the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights, unless they are fitted with special conversion software based on a model of the geoid. Because geodetic point coordinates (and heights) are always obtained in a system that has itself been constructed using real observations, geodesists introduce the concept of a "geodetic datum": a physical realization of a coordinate system used for describing point locations. The realization is the result of "choosing" conventional coordinate values for one or more datum points. In the case of height data, it suffices to choose "one" datum point: the reference benchmark, typically a tide gauge at the shore. Thus we have vertical data like the NAP (Normaal Amsterdams Peil), the North American Vertical Datum 1988 (NAVD 88), the Kronstadt datum, the Trieste datum, and so on. In the case of plane or spatial coordinates, we typically need several datum points. A regional, ellipsoidal datum like ED 50 can be fixed by prescribing the undulation of the geoid and the deflection of the vertical in "one" datum point, in this case the Helmert Tower in Potsdam. However, an overdetermined ensemble of datum points can also be used. Changing the coordinates of a point set referring to one datum, so as to make them refer to another datum, is called a "datum transformation". In the case of vertical data, this consists of simply adding a constant shift to all height values. In the case of plane or spatial coordinates, datum transformation takes the form of a similarity or "Helmert transformation", consisting of a rotation and scaling operation in addition to a simple translation. In the plane, a Helmert transformation has four parameters; in space, seven. In the abstract, a coordinate system as used in mathematics and geodesy is called a "coordinate system" in ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system". When these coordinates are realized by choosing datum points and fixing a geodetic datum, ISO says "coordinate reference system", while IERS says "reference frame". The ISO term for a datum transformation again is a "coordinate transformation". Point positioning is the determination of the coordinates of a point on land, at sea, or in space with respect to a coordinate system. Point position is solved by computation from measurements linking the known positions of terrestrial or extraterrestrial points with the unknown terrestrial position. This may involve transformations between or among astronomical and terrestrial coordinate systems. The known points used for point positioning can be triangulation points of a higher-order network or GPS satellites. Traditionally, a hierarchy of networks has been built to allow point positioning within a country. Highest in the hierarchy were triangulation networks. These were densified into networks of traverses (polygons), into which local mapping survey measurements, usually with measuring tape, corner prism, and the familiar red and white poles, are tied. 
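The seven-parameter spatial Helmert transformation mentioned above (three translations, three small rotations and a scale change) can be sketched in a few lines of Python. The rotation sign convention differs between authorities, and the parameter values in the example are purely illustrative, not an official datum shift:

```python
import numpy as np

def helmert_7param(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Apply a 7-parameter similarity (Helmert) transformation.

    xyz        : (..., 3) geocentric coordinates in metres
    tx, ty, tz : translations in metres
    rx, ry, rz : small rotation angles in arc-seconds
    s_ppm      : scale change in parts per million
    """
    arcsec = np.pi / (180.0 * 3600.0)
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    # small-angle (first-order) rotation matrix; sign conventions vary
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    scale = 1.0 + s_ppm * 1e-6
    t = np.array([tx, ty, tz])
    return t + scale * (np.asarray(xyz) @ R.T)

# illustrative (made-up) parameters applied to one geocentric point
print(helmert_7param([4027894.0, 307045.0, 4919505.0],
                     tx=-87.0, ty=-98.0, tz=-121.0,
                     rx=0.0, ry=0.0, rz=0.0, s_ppm=0.0))
```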
Nowadays all but special measurements (e.g., underground or high-precision engineering measurements) are performed with GPS. The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors are then adjusted in traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is used to define a single global, geocentric reference frame which serves as the "zero order" global reference to which national measurements are attached. For surveying mapping, Real Time Kinematic GPS is frequently employed, tying in the unknown points with known terrestrial points nearby in real time. One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. In every country, thousands of such known points exist and are normally documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements. In geometric geodesy, two standard problems exist—the first (direct or forward) and the second (inverse or reverse). In plane geometry (valid for small areas on Earth's surface), the solutions to both problems reduce to simple trigonometry. On a sphere, however, the solution is significantly more complex, because in the inverse problem the azimuths will differ between the two end points of the connecting great-circle arc. On the ellipsoid of revolution, geodesics may be written in terms of elliptic integrals, which are usually evaluated in terms of a series expansion—see, for example, Vincenty's formulae. In the general case, the solution is called the geodesic for the surface considered. The differential equations for the geodesic can be solved numerically. Here we introduce some basic observational concepts, like angles and coordinates, used in geodesy (and astronomy as well), mostly from the viewpoint of the local observer. The level is used for determining height differences and height reference systems, commonly referred to mean sea level. The traditional spirit level directly produces these practically most useful heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid knowledge accumulates, one may expect the use of GPS heighting to spread. The theodolite is used to measure horizontal and vertical angles to target points. These angles are referred to the local vertical. The tacheometer additionally determines, electronically or electro-optically, the distance to the target, and is highly automated or even robotic in its operations. The method of free station position is widely used. For local detail surveys, tacheometers are commonly employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. Real-time kinematic (RTK) GPS techniques are used as well. Data collected are tagged and recorded digitally for entry into a Geographic Information System (GIS) database. Geodetic GPS receivers produce three-dimensional coordinates directly in a geocentric coordinate frame. Such a frame is, e.g., WGS84, or the frames that are regularly produced and published by the International Earth Rotation and Reference Systems Service (IERS). 
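On the sphere the first (direct) problem has a simple closed-form solution; the Python sketch below implements only that spherical approximation (ellipsoidal work would use Vincenty's formulae or another geodesic algorithm, and the mean radius used here is itself a simplification):

```python
import math

R_EARTH = 6371000.0  # mean Earth radius in metres (spherical approximation)

def direct_problem_sphere(lat1_deg, lon1_deg, azimuth_deg, distance_m):
    """First (direct) geodetic problem on a sphere: from a start point,
    an initial azimuth and a distance, compute the end point (lat, lon)."""
    lat1 = math.radians(lat1_deg)
    lon1 = math.radians(lon1_deg)
    az = math.radians(azimuth_deg)
    sigma = distance_m / R_EARTH  # angular distance travelled

    lat2 = math.asin(math.sin(lat1) * math.cos(sigma) +
                     math.cos(lat1) * math.sin(sigma) * math.cos(az))
    lon2 = lon1 + math.atan2(math.sin(az) * math.sin(sigma) * math.cos(lat1),
                             math.cos(sigma) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# 100 km towards the north-east from (60 N, 25 E)
print(direct_problem_sphere(60.0, 25.0, 45.0, 100e3))
```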
GPS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys. For planet-wide geodetic surveys, previously impossible, satellite laser ranging (SLR), lunar laser ranging (LLR) and very-long-baseline interferometry (VLBI) techniques can also be mentioned. All these techniques also serve to monitor irregularities in Earth's rotation as well as plate tectonic motions. Gravity is measured using gravimeters, of which there are two kinds. First, "absolute gravimeters" are based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control and can be used in the field. Second, "relative gravimeters" are spring-based and are more common. They are used in gravity surveys over large areas for establishing the figure of the geoid over these areas. The most accurate relative gravimeters are called "superconducting" gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide for studying Earth's tides, rotation, interior, and ocean and atmospheric loading, as well as for verifying the Newtonian constant of gravitation. In the future, gravity and altitude will be measured via relativistic time dilation, using strontium optical clocks. Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are "angles", not metric measures, and describe the "direction" of the local normal to the reference ellipsoid of revolution. This is "approximately" the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works fairly well provided an ellipsoidal model of the figure of Earth is used. One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the pole and shortest at the equator, and the length of the nautical mile varies accordingly. A metre was originally defined as the 10-millionth part of the length from equator to North Pole along the meridian through Paris (the target was not quite reached in actual implementation, so the modern definitions are off by about 200 ppm). This means that one kilometre is roughly equal to (1/40,000) * 360 * 60 meridional minutes of arc, which equals 0.54 nautical mile, though this is not exact because the two units are defined on different bases (the international nautical mile is defined as exactly 1,852 m, corresponding to a rounding of 1,000/0.54 m to four digits). In geodesy, temporal change can be studied by a variety of techniques. Points on Earth's surface change their location due to a variety of mechanisms, such as plate tectonics, tidal deformation and postglacial rebound. The science of studying deformations and motions of Earth's crust and its solidity as a whole is called geodynamics. Often, study of Earth's irregular rotation is also included in its definition. Techniques for studying geodynamic phenomena on the global scale include the space-geodetic techniques mentioned above (GPS, SLR, LLR and VLBI), together with precise gravimetric monitoring.
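The unit arithmetic in this paragraph is easy to check with a few lines of Python (using the GRS 80 equatorial radius quoted earlier; the 40,000 km meridian figure is the nominal historical value, not an exact modern one):

```python
import math

a = 6378137.0  # GRS 80 equatorial radius, metres

# one minute of arc along the equator: the "geographical mile"
arc_minute_equator = 2 * math.pi * a / (360 * 60)
print(arc_minute_equator)        # ~1855.3 m

# historical metre: 1/10,000,000 of the equator-to-pole arc,
# i.e. a nominal 40,000 km meridian circumference
minutes_per_km = (360 * 60) / 40000
print(minutes_per_km)            # 0.54 meridional minutes of arc per km
print(1000 / 1852)               # ratio to the modern nautical mile, ~0.5400
```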
https://en.wikipedia.org/wiki?curid=12608
Eurogame A Eurogame, also called a German-style board game, German game, or Euro-style game, is a class of tabletop games that generally has indirect player interaction and abstract physical components. Eurogames are sometimes contrasted with American-style board games, which generally involve more luck, conflict, and drama. They are usually less abstract than chess or Go, but more abstract than wargames. Likewise, they generally require more thought and planning than party games such as "Pictionary" or "Trivial Pursuit". Contemporary Eurogames, such as "Acquire", appeared in the 1960s. The 3M series of which "Acquire" formed a part became popular in Germany, and became a template for a new form of game, one in which direct conflict or warfare did not play a role, due in part to aversion in postwar Germany to products which glorified conflict. The genre developed as a more concentrated design movement in the late 1970s and early 1980s in Germany, and Germany purchased more board games "per capita" than any other country. The phenomenon has spread to other European countries such as France, the Netherlands, and Sweden. While many games are published and played in other markets such as the United States and the United Kingdom, they occupy a niche status there. "The Settlers of Catan", first published in 1995, paved the way for the genre outside Europe. Though neither the first Eurogame nor the first such game to find an audience outside Germany, it became much more popular than any of its predecessors. It quickly sold millions of copies in Germany, and in the process brought money and attention to the genre as a whole. Other games in the genre to achieve widespread popularity include "Carcassonne", "Puerto Rico", "Ticket to Ride", and "Alhambra". Eurogames tend to be focused on economics and the acquisition of resources rather than direct conflict, and have a limited amount of luck. They also differ from abstract strategy games like chess by using themes tied to specific locales, and emphasize individual development and comparative achievement rather than direct conflict. Eurogames also emphasize the mechanical challenges of their systems over having the systems match the theme of the game. They are generally simpler than the wargames that flourished in the 1970s and 1980s from publishers such as SPI and Avalon Hill, but nonetheless often have a considerable depth of play. One consequence of the increasing popularity of this genre has been an expansion upwards in complexity. Games such as Puerto Rico that were considered quite complex when Eurogames proliferated in the U.S. after the turn of the millennium are now the norm, with newer high-end titles like Terra Mystica and Tzolkin being significantly more difficult to master. While many titles (especially the strategically heavier ones) are enthusiastically played by gamers as a hobby, Eurogames are, for the most part, well-suited to social play. In keeping with this social function, various characteristics of the games tend to support that aspect well, and these have become quite common across the genre. In contrast to games such as Risk or Monopoly, in which a close game can extend indefinitely, Eurogames usually have a mechanism to stop the game within its stated playing time. Common mechanisms include a pre-determined winning score, a set number of game turns, or depletion of limited game resources. Playing time varies from a half-hour to a few hours, with one to two hours being typical. Ra and Carcassonne have limited tiles to exhaust. 
Generally Eurogames do not have a fixed number of players like chess or bridge; although there is a sizeable body of German-style games that is designed for exactly two players, most games can accommodate anywhere from two to six players (with varying degrees of suitability). Six-player games are somewhat rare, or require expansions, as with The Settlers of Catan or Carcassonne. Players play for themselves individually, rather than in a partnership or team. Another prominent characteristic of these games is the lack of player elimination. Eliminating players before the end of the game is seen as contrary to the social aspect of such games. Most of these games are designed to keep all players in the game as long as possible, so it is rare to be certain of victory or defeat until relatively late in the game. Related to no-player-elimination, Eurogame scoring systems are often designed so that hidden scoring or end-of-game bonuses can catapult a player who appears to be in a lagging position at end of play into the lead. A second-order consequence is that Eurogames tend to have multiple paths to victory (dependent on aiming at different end-of-game bonuses) and it is often not obvious to other players which strategic path a player is pursuing. Balancing mechanisms are often integrated into the rules, giving slight advantages to lagging players and slight hindrances to the leaders. This helps to keep the game competitive to the very end. These games are designed for international audiences, so they are not word games and usually do not contain much text outside of the rules. Game components often use symbols and icons instead of words, reducing the amount of text to be translated between localized editions. Gameplay also tends to de-emphasize or entirely exclude verbal communication as a game element, with many games being fully playable if all players know the rules, even if they do not speak a common language. Some publishers design games that contain instructions and game elements in more than one language, e.g., the game Ursuppe comes with rules and cards in both German and English; Khronos features instructions in French, English, and German; and a Swiss game, Enchanted Owls, provides French, German, Italian, and Romansh rules. However, this is usually not the case if the rights to sell the game outside its country of origin are sold to another publisher. English editions are often available, either published in the US or co-published by a German company cooperating with a US company, or the reverse (example: Dominion). A wide variety of often innovative mechanisms or mechanics are used, and familiar mechanics such as rolling dice and moving, capture, or trick taking are avoided. If a game has a board, the board is usually irregular rather than uniform or symmetric (such as Risk rather than chess or Scrabble). The board is often random (as in The Settlers of Catan) or has random elements (such as Tikal). Some boards are merely mnemonic or organizational and contribute only to ease of play, such as a cribbage board; examples of this include Puerto Rico and Princes of Florence. Random elements are often present, but do not usually dominate the game. While rules are light to moderate, they allow depth of play, usually requiring thought, planning, and a shift of tactics through the game and often with a chess- or backgammon-like opening game, middle game, and end game. 
Stuart Woods' "Eurogames" cites six examples of mechanics common to eurogames: Eurogame designs tend to de-emphasize luck and random elements. Often, the only random element of the game will be resource or terrain distribution in the initial setup, or (less frequently) the random order of a set of event or objective cards. The role played by deliberately random mechanics in other styles of game is instead fulfilled by the unpredictability of the behavior of other players. Examples of themes are: Although not relevant to actual play, the name of the game's designer is often prominently mentioned on the box, or at least in the rule book. Top designers enjoy considerable following among enthusiasts of Eurogames. For this reason, the name "designer games" is often offered as a description of the genre. Recently, there has also been a wave of games designed as spin-offs of popular novels, such as the games taking their style from the German bestsellers Der Schwarm and Tintenherz. Designers of Eurogames include: The Internationale Spieltage, also known as Essen Spiel, or the Essen Games Fair, is the largest non-digital game convention in the world, and the place where the largest number of eurogames are released each year. Founded in 1983 and held each fall in Essen, Germany, the fair was founded with the objective of providing a venue for people to meet and play board games, and show gaming as an integral part of German culture. A "World Boardgaming Championships" is held annually in July in Pennsylvania, USA. The event is nine-days long and includes tournament tracks of over a hundred games; while traditional wargames are played there, all of the most popular tournaments are Eurogames and it is generally perceived as a Eurogame-centered event. Attendance is international, though players from the U.S. and Canada predominate. The most prestigious German board game award is the Spiel des Jahres ("game of the year"). The award is very family-oriented. Shorter, more approachable, games such as Ticket to Ride and Elfenland are usually preferred by the committee that gives out the award. In 2011, the jury responsible for the Spiel des Jahres created the Kennerspiel des Jahres, or connoisseur's game of the year, for more complex games. The Deutscher Spiele Preis ("German game prize") is also awarded to games that are more complex and strategic, such as Puerto Rico. However, there are a few games with broad enough appeal to win both awards: The Settlers of Catan (1995), Carcassonne (2001), Dominion (2009). Xbox Live Arcade has included popular games from the genre, with "Catan" being released to strong sales on May 13, 2007, "Carcassonne" being released on June 27, 2007. "Lost Cities" and "Ticket to Ride" soon followed. "Alhambra" was due to follow later in 2007 until being cancelled. The iPhone received versions of The Settlers of Catan and "Zooloretto" in 2009. Carcassonne was added to the iPhone App Store in June 2010. Later, Ticket to Ride was developed for both the iPhone and the iPad (boosting sales of the board game tremendously).
https://en.wikipedia.org/wiki?curid=12609
Grand Unified Theory A Grand Unified Theory (GUT) is a model in particle physics in which, at high energies, the three gauge interactions of the Standard Model that define the electromagnetic, weak, and strong interactions, or forces, are merged into a single force. Although this unified force has not been directly observed, many GUT models theorize its existence. If unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct. Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single electroweak interaction. GUT models predict that at even higher energy, the strong interaction and the electroweak interaction will unify into a single electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a theory of everything (TOE) rather than a GUT. GUTs are often seen as an intermediate step towards a TOE. The novel particles predicted by GUT models are expected to have extremely high masses of around the GUT scale of 10^16 GeV, just a few orders of magnitude below the Planck scale of 10^19 GeV, and so are well beyond the reach of any foreseen particle collider experiments. Therefore, the particles predicted by GUT models will be unable to be observed directly, and instead the effects of grand unification might be detected through indirect observations such as proton decay, electric dipole moments of elementary particles, or the properties of neutrinos. Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles. While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this, and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model. Models that do not unify the three interactions using one simple group as the gauge symmetry, but do so using semisimple groups, can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well. Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the Pati–Salam model of Abdus Salam and Jogesh Pati, which is based on a semisimple Lie algebra; Salam and Pati pioneered the idea of unifying gauge interactions. The acronym GUT was coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use the acronym in a paper. The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. 
While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2), which allow only discrete charges, the remaining component, the weak hypercharge interaction, is described by an abelian U(1) symmetry, which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular the weak mixing angle, Grand Unification ideally reduces the number of independent input parameters, but is also constrained by observations. Grand Unification is reminiscent of the unification of electric and magnetic forces by Maxwell's theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different. SU(5) is the simplest GUT. The smallest simple Lie group which contains the Standard Model, and upon which the first Grand Unified Theory was based, is SU(5). Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature. The two smallest irreducible representations of SU(5) are the 5 (the defining representation) and the 10. In the standard assignment, one generation of matter is placed in the conjugate 5̄ together with the 10: the 5̄ contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content. The hypothetical right-handed neutrinos are a singlet of SU(5), which means their mass is not forbidden by any symmetry; it does not require spontaneous symmetry breaking, which explains why it can be very heavy (see the seesaw mechanism). The next simple Lie group which contains the Standard Model is SO(10). Here, the unification of matter is even more complete, since the irreducible 16-dimensional spinor representation contains both the 5̄ and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended Standard Model with neutrino masses. This is already the largest simple group which achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector). 
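For concreteness, the Standard Model decomposition of one such generation can be written out explicitly. The following is the standard textbook assignment (with hypercharge normalized so that Q = T3 + Y), not something specific to any one GUT construction:

```latex
\bar{5} = (\bar{3},1)_{+1/3} \oplus (1,2)_{-1/2}
        \;\supset\; d^{c},\ (\nu,\,e)_{L}
\qquad
10 = (3,2)_{+1/6} \oplus (\bar{3},1)_{-2/3} \oplus (1,1)_{+1}
   \;\supset\; (u,\,d)_{L},\ u^{c},\ e^{c}
```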
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most don't (see Georgi–Jarlskog mass relation). The boson matrix for SO(10) is found by taking the corresponding matrix for SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10). In some forms of string theory, including "E"8 × "E"8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably, E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT. Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra, which naturally appear in the higher SU(N) GUTs, considerably modify the desert physics and lead to realistic (string-scale) grand unification for the conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT, a simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and the problem of unification of flavor can be found. GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into representations of SU(8). This can be divided into SU(5) × SU(3) × U(1), which is the SU(5) theory together with some heavy bosons which act on the generation number. GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16). Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article on the symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices, which gives a 16-dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4), so it can at least contain the gluons and photon of SU(3) × U(1). It is probably not possible, however, to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions can also be written down. A further complication with quaternion representations of fermions is that there are two types of multiplication, left multiplication and right multiplication, which must be taken into account. It turns out that including left- and right-multiplication by quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion, which adds an extra SU(2) and so gives an extra neutral boson and two more charged bosons. 
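As a consistency check on the four-generation counting above, here is a sketch of the arithmetic (the representation labels are ordinary dimension counts and are meant only as an illustration): each generation contributes 16 Weyl fermions (the 15 of the Standard Model plus a right-handed neutrino), so

4 \times 16 = 64 = 8 + 56 ,

where 8 is the dimension of the fundamental representation of SU(8) and 56 = \binom{8}{3} is the dimension of its rank-three antisymmetric tensor representation. Counting the 64 antiparticle states as well gives 128 = 2^{7}, which is exactly the dimension of a single chiral spinor of O(16), matching the statement above.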
Thus the group of left- and right-multiplying quaternion matrices is Sp(8) × SU(2), which does include the standard model bosons. If $\psi$ is a quaternion-valued spinor, $A_\mu$ is a quaternion-hermitian matrix coming from Sp(8) and $B_\mu$ is a pure imaginary quaternion (both of which are 4-vector bosons), then the interaction term is $\overline{\psi}\,\gamma^{\mu}\left(A_{\mu} + B_{\mu}\right)\psi$. It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3 × 3 hermitian matrix with certain additions for the diagonal elements, then these matrices form an exceptional (Grassmann-) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7 or E8) depending on the details. Because they are fermions, the anti-commutators of the Jordan algebra become commutators. It is known that E6 has SO(10) as a subgroup and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles' spin direction. Each of these possibilities presents theoretical problems. Other structures have been suggested, including Lie 3-algebras and Lie superalgebras; neither of these fits naturally with Yang–Mills theory. In particular, Lie superalgebras would introduce bosons with the wrong statistics. Supersymmetry, however, does fit with Yang–Mills; for example, N = 4 super Yang–Mills theory can be formulated for such gauge groups. The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group running, which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale. The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale, of roughly 10^16 GeV. It is commonly believed that this matching is unlikely to be a coincidence, and it is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem; i.e., it stabilizes the electroweak Higgs mass against radiative corrections. Since Majorana masses of the right-handed neutrino are forbidden by the SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale, where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. 
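The running of the three inverse couplings can be illustrated numerically with a one-loop sketch in Python (the beta coefficients are the standard one-loop Standard Model and MSSM values in GUT normalization; the starting values at the Z mass are rough, assumed inputs, and threshold effects are ignored, so this is only an illustration of the near-miss versus near-meeting described above):

import numpy as np

# Rough inverse couplings at the Z mass, (alpha_1^-1, alpha_2^-1, alpha_3^-1),
# with hypercharge in GUT normalization (assumed, approximate inputs).
ALPHA_INV_MZ = np.array([59.0, 29.6, 8.5])
MZ = 91.19  # GeV

# Standard one-loop beta coefficients in GUT normalization.
B_SM = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # Standard Model
B_MSSM = np.array([33.0 / 5.0, 1.0, -3.0])          # MSSM

def alpha_inv(mu, b):
    # One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ)
    return ALPHA_INV_MZ - b / (2.0 * np.pi) * np.log(mu / MZ)

for mu in (1.0e14, 1.0e16):
    print("mu = %.0e GeV  SM: %s  MSSM: %s"
          % (mu, np.round(alpha_inv(mu, B_SM), 1), np.round(alpha_inv(mu, B_MSSM), 1)))

Run as is, the three Standard Model values remain several units apart at 10^16 GeV, while the MSSM values come within about one unit of each other near that scale, which is the pattern described in the text.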
These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios. Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes "all" fundamental forces, including gravitation, is termed a theory of everything. A number of mainstream GUT models have been proposed, along with related models that are not quite GUTs. Note that these models refer to Lie algebras, not to Lie groups; several distinct Lie groups can share the same Lie algebra, so the algebra alone does not fix the global form of the gauge group. The most promising candidate is SO(10). (Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from heterotic string theory. GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others, but none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model; proton decay has never been observed by experiments. The minimal experimental limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models. Some GUT theories, like SU(5) and SO(10), suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale here). In a theory unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group. Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations. A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group, and chiral Weyl fermions taking on values within a complex representation of the Lie group. The Lie group contains the Standard Model group, and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking down to the Standard Model. The Weyl fermions represent matter. There is currently no hard evidence that nature is described by a Grand Unified Theory. The discovery of neutrino oscillations indicates that the Standard Model is incomplete and has led to renewed interest in certain GUTs such as SO(10). Among the few possible experimental tests of certain GUTs are proton decay and fermion masses. There are a few more special tests for supersymmetric GUTs. 
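The list of ingredients above can be summarized in a schematic Lagrangian (a generic sketch of this class of models, with the gauge group, representations and potential left abstract rather than drawn from any specific GUT):

\mathcal{L} \;=\; -\frac{1}{4g^{2}}\,\mathrm{Tr}\,F_{\mu\nu}F^{\mu\nu} \;+\; (D_{\mu}\Phi)^{\dagger}(D^{\mu}\Phi) - V(\Phi) \;+\; i\,\overline{\Psi}\gamma^{\mu}D_{\mu}\Psi \;+\; \big(\overline{\Psi}\,Y(\Phi)\,\Psi + \mathrm{h.c.}\big),

where $F_{\mu\nu}$ is the field strength of the connection, $\Phi$ stands for the Higgs scalars in their chosen representations, $\Psi$ for the chiral Weyl fermions, $D_{\mu}$ for the covariant derivative, and $V(\Phi)$ for a potential whose minimum gives the VEVs that break the GUT group down to the Standard Model group.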
However, minimum proton lifetimes from experimental searches (at or exceeding the 10^34-10^35 year range) have ruled out simpler GUTs and most non-SUSY models. The maximum upper limit on proton lifetime (if the proton is unstable) is calculated at 6 × 10^39 years for SUSY models and 1.4 × 10^36 years for minimal non-SUSY GUTs. The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale called the GUT scale, approximately equal to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This interesting numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group.
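The relation between these lifetime limits and the GUT scale can be seen with a rough dimensional estimate (an order-of-magnitude sketch using assumed round numbers for the unified coupling and the proton mass, not a calculation quoted from any particular model). For decay mediated by a gauge boson of mass $M_X$,

\tau_{p} \sim \frac{M_{X}^{4}}{\alpha_{\mathrm{GUT}}^{2}\,m_{p}^{5}} \approx \frac{(10^{16}\,\mathrm{GeV})^{4}}{(1/40)^{2}\,(1\,\mathrm{GeV})^{5}} \approx 1.6\times10^{67}\,\mathrm{GeV}^{-1} \approx 10^{43}\,\mathrm{s} \approx 3\times10^{35}\,\mathrm{years},

so experimental bounds in the 10^34-10^35 year range probe precisely the range of heavy gauge boson masses expected in the simplest models, and models whose unification scale lies much below 10^16 GeV are excluded.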
https://en.wikipedia.org/wiki?curid=12610
GTE GTE Corporation, formerly General Telephone & Electronics Corporation (1955–1982), was the largest independent telephone company in the United States during the days of the Bell System. The company operated from 1926, with roots tracing further back than that, until 2000, when it was acquired by Bell Atlantic; the combined company took the name Verizon. The Wisconsin-based Associated Telephone Utilities Company was founded in 1926; it went bankrupt in 1933 during the Great Depression, and was reorganized as General Telephone in 1934. In 1991, it acquired the third largest independent, Continental Telephone (ConTel). It owned Automatic Electric, a telephone equipment supplier similar in many ways to Western Electric, and Sylvania Lighting, the only non-communications-oriented company under GTE ownership. GTE provided local telephone service to many areas of the U.S. through operating companies, much as American Telephone & Telegraph provided local telephone service through its 22 Bell Operating Companies. The company acquired BBN Planet, one of the earliest Internet service providers, in 1997. That division became known as GTE Internetworking, and was later spun off into the independent company Genuity (a name recycled from another Internet company GTE acquired in 1997) to satisfy Federal Communications Commission (FCC) requirements regarding the GTE-Bell Atlantic merger that created Verizon. GTE operated in Canada via large interests in subsidiary companies such as BC Tel and Quebec-Téléphone. When foreign ownership restrictions on telecommunications companies were introduced, GTE's ownership was grandfathered. When BC Tel merged with Telus (the name given the privatized Alberta Government Telephones (AGT)) to create BCT.Telus, GTE's Canadian subsidiaries were merged into the new parent, making it the second-largest telecommunications carrier in Canada. As such, GTE's successor, Verizon Communications, was the only foreign telecommunications company with a greater than 20% interest in a Canadian carrier, until Verizon completely divested itself of its shares in 2004. In the Caribbean, CONTEL purchased several major stakes in the newly independent countries of the British West Indies (namely in Barbados, Jamaica, and Trinidad and Tobago). Prior to GTE's merger with Bell Atlantic, GTE also maintained an interactive television service joint-venture called GTE mainStreet (sometimes also called mainStreet USA) as well as an interactive entertainment and video game publishing operation, GTE Interactive Media. In 1918, Wisconsin public utility accountants John F. O'Connell, Sigurd L. Odegard, and John A. Pratt pooled $33,500 to purchase the Richland Center Telephone Company, serving 1,466 telephones in Wisconsin's dairy belt. In 1920, the three accountants formed Commonwealth Telephone Company as the parent of Richland Center Telephone, with Odegard as president, Pratt as vice-president, and O'Connell as secretary. In 1922, Pratt resigned as vice-president and was replaced by Clarence R. Brown, a former Bell System employee. Commonwealth Telephone expanded across southern Wisconsin, and made its first purchase outside the state later in the decade when it bought Belvidere Telephone Company in Illinois. It also diversified by acquiring two electric utilities in Wisconsin. Expansion was stepped up in 1926, when Odegard secured an option to purchase Associated Telephone Company of Long Beach, California. 
Later that year, Commonwealth Telephone and Associated Telephone merged as Associated Telephone Utilities. During its first six years, Associated Telephone Utilities acquired 340 telephone companies in the West, Midwest and East, which were consolidated into 45 companies operating more than 437,000 telephones in 25 states. By the time the stock market crashed in October 1929, Associated Telephone Utilities was operating about 500,000 telephones with revenues approaching $17 million. In January 1930, a new subsidiary, Associated Telephone Investment Company, was established. Designed to support its parent's acquisition program, the new company focused primarily on buying company stock in order to bolster its market value. Within two years, the investment company had incurred major losses, and a $1 million loan had to be negotiated. Associated Telephone Investment dissolved, but not soon enough to keep Associated Telephone from lapsing into receivership in 1933. The company was reorganized that same year, and two years later re-emerged as General Telephone Corporation, operating 12 newly consolidated companies. John Winn, a 26-year veteran of the Bell System, was named president. In 1936, General Telephone created a new subsidiary, General Telephone Directory Company, to publish directories for the parent's entire service area. Like other businesses, the telephone industry was under government restrictions during World War II, and General Telephone was called upon to increase services at military bases and war-production factories. Following the war, General Telephone reactivated an acquisitions program that had been dormant for more than a decade and purchased 118,000 telephone lines between 1946 and 1950. In 1950, General Telephone purchased its first telephone-equipment manufacturing subsidiary, Leich Electric Company, along with the related Leich Sales Corporation. General Telephone's holdings included 15 telephone companies across 20 states by 1951, when Donald C. Power (attorney, utilities commissioner and former executive secretary for Ohio Governor John Bricker) was named president of the company under chairman and long-time GT executive Morris F. LaCroix, replacing the retiring Harold Bozell (president 1940–1951). Power proceeded to expand the company through the 1950s, principally through two acquisitions. In 1955, Theodore Gary & Company, the second-largest independent telephone company, which had 600,000 telephone lines, was merged into General Telephone, which had grown into the largest independent outside the Bell System. The merger gave the company 2.5 million lines. Theodore Gary's assets included telephone operations in the Dominican Republic, British Columbia, and the Philippines, as well as Automatic Electric, the second-largest telephone equipment manufacturer in the U.S. It also had a subsidiary, named the General Telephone and Electric Corporation, formed in 1930 with the Transamerica Corporation and British investors to compete against ITT. In 1959, General Telephone and Sylvania Electric Products merged, and the parent's name was changed to General Telephone & Electronics Corporation (GT&E). The merger gave Sylvania, a leader in such industries as lighting, television and radio, and chemistry and metallurgy, the needed capital to expand. For General Telephone, the merger meant the added benefit of Sylvania's extensive research and development capabilities in the field of electronics. 
Power also orchestrated other acquisitions in the late 1950s, including Peninsular Telephone Company in Florida, with 300,000 lines, and Lenkurt Electric Company, Inc., a leading producer of microwave and data transmissions systems. In 1960, the subsidiary GT&E International Incorporated was formed to consolidate manufacturing and marketing activities of Sylvania, Automatic Electric, and Lenkurt, outside the United States. Power was named C.E.O. and chairman in 1961, making way for Leslie H. Warner, formerly of Theodore Gary, to become president. During the next several years, the scope of GT&E's research, development, and marketing activities was broadened. In 1963, Sylvania began full-scale production of color television picture tubes, and within two years, it was supplying color tubes for 18 of the 23 domestic U.S. television manufacturers. About the same time, Automatic Electric began supplying electronic switching equipment for the U.S. defense department's global communications systems, and GT&E International began producing earth-based stations for both foreign and domestic markets. GT&E's telephone subsidiaries, meanwhile, began acquiring community-antenna television systems (CATV) franchises in their operating areas. In 1964, Warner orchestrated a deal that merged Western Utilities Corporation, the nation's second-largest independent telephone company, with 635,000 telephones, into GT&E. The following year Sylvania introduced the revolutionary four-sided flashcube, enhancing its position as the world's largest flashbulb producer. Acquisitions in telephone service continued under Warner during the mid-1960s. Purchases included Quebec Telephone in Canada, Hawaiian Telephone Company, and Northern Ohio Telephone Company and added a total of 622,000 telephone lines to GT&E operations. By 1969, GT&E was serving ten million telephones. In the late 1960s, GT&E joined in the search for a railroad car Automatic Car Identification system. It designed the KarTrak optical system, which won over other manufacturer's systems in field trials, but ultimately proved to need too much maintenance. In the late 1970s the system was abandoned. In March 1970, GT&E's New York City headquarters was bombed by a radical antiwar group in protest of the company's participation in defense work. In December of that year the GT&E board agreed to move the company's headquarters to Stamford, Connecticut. In 1971 GT&E undertook an identity change and became simply GTE, while Sylvania Electric Products became GTE Sylvania. The same year, Donald C. Power retired and Leslie H. Warner became chairman of the Board. Theodore F. Brophy was brought in as president. After first proposing to build separate satellite systems, GTE and its telecommunications rival, American Telephone & Telegraph, announced in 1974 joint venture plans for the construction and operation of seven earth-based stations interconnected by two satellites. Also in 1974 Sylvania acquired name and distribution rights for Philco television and stereo products. GTE International expanded its activities during the same period, acquiring television manufacturers in Canada and Israel and a telephone manufacturer in Germany. In 1976, newly elected chairman Theodore F. Brophy reorganized the company along five global product lines: communications, lighting, consumer electronics, precision materials, and electrical equipment. 
GTE International was phased out during the reorganization, and GTE Products Corporation was formed to encompass both domestic and foreign manufacturing and marketing operations. At the same time, GTE Communications Products was formed to oversee operations of Automatic Electric, Lenkurt, Sylvania, and GTE Information Systems. Another reorganization followed in 1979 under new president Thomas A. Vanderslice. GTE Products Group was eliminated as an organizational unit and GTE Electrical Products, consisting of lighting, precision materials, and electrical equipment, was formed. Vanderslice also revitalized the GT&E Telephone Operating Group in order to develop competitive strategies for anticipated regulatory changes in the telecommunications industry. In 1979, GTE purchased Telenet to establish a presence in the growing packet switching data communications business. GTE Telenet was later included in the US Telecom joint venture. GT&E sold its consumer electronics businesses, including the accompanying brand names of Philco and Sylvania, to Philips in 1981, after watching revenues from television and radio operations decrease precipitously with the success of foreign manufacturers. Following AT&T's 1982 announcement that it would divest 22 telephone operating companies, GT&E made a number of reorganization moves. In 1982, the company adopted the name GTE Corporation and formed GTE Mobilnet Incorporated to handle the company's entrance into the new cellular telephone business. In 1983, GTE sold its electrical equipment, brokerage information services, and cable television equipment businesses. That same year, Automatic Electric and Lenkurt were combined as GTE Network Systems. GTE became the third-largest long-distance telephone company in 1983 through the acquisition of Southern Pacific Communications Company. At the same time, Southern Pacific Satellite Company was acquired, and the two firms were renamed GTE Sprint Communications Corporation and GTE Spacenet Corporation, respectively. Through an agreement with the Department of Justice, GTE agreed to keep Sprint Communications separate from its other telephone companies and to limit other GTE telephone subsidiaries in certain markets. In December 1983, Vanderslice resigned as president and chief operating officer. In 1984, GTE formalized its decision to concentrate on three core businesses: telecommunications, lighting, and precision metals. That same year, the company's first satellite was launched, and GTE's cellular telephone service went into operation; GTE's earnings exceeded $1 billion for the first time. In 1986, GTE acquired Airfone Inc., a telephone service provider for commercial aircraft and railroads, and Rotaflex plc, a United Kingdom-based manufacturer of lighting fixtures. Beginning in 1986, GTE spun off several operations to form joint ventures. That same year, GTE Sprint and United Telecommunications' long-distance subsidiary, US Telecom, agreed to merge and form US Sprint Communications Company, with each parent retaining a 50 percent interest in the new firm. Also in 1986, GTE transferred its international transmission, overseas central office switching, and business systems operations to a joint venture with Siemens AG of Germany, which took 80 percent ownership of the new firm. The following year, GTE transferred its business systems operations in the United States to a new joint venture, Fujitsu GTE Business Systems, Inc., formed with Fujitsu Limited, which retained 80 percent ownership. 
In April 1988, after the retirement of Theodore F. Brophy, James L. "Rocky" Johnson was promoted from his position as president and chief operating officer to CEO of GTE; he was appointed chairman in 1991. Under his leadership, GTE divested its consumer communications products unit as part of a telecommunications strategy to place increasing emphasis on the services sector. The following year GTE sold the majority of its interest in US Sprint to United Telecommunications and its interest in Fujitsu GTE Business Systems to Fujitsu. In 1989, GTE and AT&T formed the joint venture company AG Communication Systems Corporation, designed to bring advanced digital technology to GTE's switching systems. GTE retained 51 percent control over the joint venture, with AT&T pledging to take complete control of the new firm in 15 years. With an increasing emphasis on telecommunications, in 1989 GTE launched a program to become the first cellular provider offering nationwide service and introduced the nation's first rural service area, providing cellular service on the Hawaiian island of Kauai. The following year GTE acquired the Providence Journal Company's cellular properties in five southern states for $710 million and became the second largest cellular-service provider in the United States. In 1990, GTE reorganized its activities around three business groups: telecommunications products and services, telephone operations, and electrical products. That same year, GTE and Contel Corporation announced merger plans that would strengthen GTE's telecommunications and telephone sectors. Following action or review by more than 20 governmental bodies, the merger of GTE and Contel was approved in March 1991. Over half of Contel's $6.6 billion purchase price, $3.9 billion, was assumed debt. In April 1992, James L. "Rocky" Johnson retired after 43 years at GTE, remaining on the GTE board of directors as Chairman Emeritus. Charles "Chuck" Lee was named to succeed Johnson. Lee's first order of business was reducing that debt. He sold GTE's North American Lighting business to a Siemens affiliate for over $1 billion, shaved off local exchange properties in Idaho, Tennessee, Utah, and West Virginia to generate another $1 billion, and divested GTE's interest in Sprint in 1992. In 1994, GTE sold its Spacenet satellite operations to General Electric and sold Contel of Maine to Oxford Networks, which placed the company into a newly created subsidiary, Oxford West Telephone. The Telecommunications Act of 1996 promised to encourage competition among local phone providers, long distance services, and cable television companies. Many leading telecoms prepared for the new competitive realities by aligning themselves with entertainment and information providers. GTE, on the other hand, continued to focus on its core operations, seeking to make them as efficient as possible. Among other goals, GTE's plan sought to double revenues and slash costs by $1 billion per year by focusing on five key areas of operation: technological enhancement of wireline and wireless systems, expansion of data services, global expansion, and diversification into video services. GTE hoped to cross-sell its large base of wireline customers on wireless, data and video services, launching Tele-Go, a user-friendly service that combined cordless and cellular phone features. 
The company bought broadband spectrum cellular licenses in Atlanta, Seattle, Cincinnati and Denver, and formed a joint venture with SBC Communications to enhance its cellular capabilities in Texas. In 1995, the company undertook a 15-state test of video conferencing services, as well as a video dialtone (VDT) experiment that proposed to offer cable television programming to 900,000 homes by 1997. GTE also formed a video programming and interactive services joint venture with Ameritech Corporation, BellSouth Corporation, SBC, and The Walt Disney Company in the fall of 1995. Foreign efforts included affiliations with phone companies in Argentina, Mexico, Germany, Japan, Canada, the Dominican Republic, Venezuela and China. The early 1990s reorganization included a 37.5 percent workforce reduction, from 177,500 employees in 1991 to 111,000 by 1994. Lee's fivefold strategy had begun to bear fruit by the mid-1990s. While the communication conglomerate's sales remained rather flat at about $19.8 billion from 1992 through 1994, its net income increased by 43.7 percent over the same period, from $1.74 billion to a record $2.5 billion. Bell Atlantic acquired GTE on June 30, 2000, and named the new entity Verizon Communications. The GTE operating companies retained by Verizon are now collectively known as the Verizon West division of Verizon (which, despite the name, also includes some east coast service territories). The remaining smaller operating companies were sold off or folded into those that were retained. Additional properties were sold off within a few years after the merger to CenturyTel, Alltel, and Hawaiian Telcom. On July 1, 2010, Verizon sold many former GTE properties to Frontier Communications. Other GTE territories in California, Florida, and Texas were sold to Frontier in 2015 and transferred in 2016, thus ending Verizon's landline operations outside of the historic Bell Atlantic footprint. Verizon still operates phone service in non-Bell System areas in Pennsylvania under Verizon North, and in non-Bell System areas in Virginia and Knotts Island, North Carolina under Verizon South. Prior to the acquisition by Bell Atlantic, GTE owned a number of operating companies in the US. Following the acquisition, some of these companies and/or their access lines were sold off to other companies, such as Alltel, ATEAC, The Carlyle Group, CenturyTel, Citizens/Frontier Communications, and Valor Telecom.
https://en.wikipedia.org/wiki?curid=12611
General aviation General aviation (GA) represents all civil aviation "aircraft operation other than a commercial air transport or an aerial work operation". The International Civil Aviation Organization (ICAO) defines civil aviation aircraft operations in three categories: General Aviation (GA), Aerial Work (AW) and Commercial Air Transport (CAT). General aviation thus represents the 'private transport' and recreational components of aviation. It also includes activities surrounding aircraft homebuilding, flight training, flying clubs, and aerial application, as well as forms of charitable and humanitarian transportation. Private flights are made in a wide variety of aircraft: light and ultra-light aircraft, sport aircraft, business aircraft (like private jets), gliders and helicopters. Flights can be carried out under both visual flight rules (VFR) and instrument flight rules (IFR), and can use controlled airspace with permission. Aerial work operations are separated from general aviation by ICAO in its definition. These activities include agriculture, construction, photography, surveying, observation and patrol, search and rescue, and aerial advertisement. However, ICAO has considered officially extending the definition of general aviation to include aerial work, to reflect common usage and for statistical purposes. The International Council of Aircraft Owner and Pilot Associations (IAOPA) publishes its own definitions of general aviation aircraft activities. The majority of the world's air traffic falls into the category of general aviation, and most of the world's airports serve GA exclusively. In 2003 the European Aviation Safety Agency (EASA) was established as the central EU regulator, taking over responsibility for legislating airworthiness and environmental regulation from the national authorities. Of the 21,000 civil aircraft registered in the UK, 96 percent are engaged in GA operations, and annually the GA fleet accounts for between 1.25 and 1.35 million hours flown. There are 28,000 Private Pilot Licence holders and 10,000 certified glider pilots. Some of the 19,000 pilots who hold professional licences are also engaged in GA activities. GA operates from more than 1,800 airports and landing sites or aerodromes, ranging in size from large regional airports to farm strips. GA is regulated by the Civil Aviation Authority (CAA), although regulatory powers are being increasingly transferred to the European Aviation Safety Agency (EASA). The main focus is on standards of airworthiness and pilot licensing, and the objective is to promote high standards of safety. General aviation is particularly popular in North America, with over 6,300 airports available for public use by pilots of general aviation aircraft (around 5,200 airports in the U.S., and over 1,000 in Canada). In comparison, scheduled flights operate from around 560 airports in the U.S. According to the U.S. Aircraft Owners and Pilots Association (AOPA), general aviation provides more than one percent of the United States' GDP, accounting for 1.3 million jobs in professional services and manufacturing. Most countries have authorities that oversee all civil aviation, including general aviation, adhering to the standardized codes of the International Civil Aviation Organization (ICAO). 
Examples include the Federal Aviation Administration (FAA) in the United States, the Civil Aviation Authority (CAA) in the United Kingdom, Civil Aviation Authority of Zimbabwe (CAAZ) in Zimbabwe, the Luftfahrt-Bundesamt (LBA) in Germany, the Bundesamt für Zivilluftfahrt in Switzerland, Transport Canada in Canada, the Civil Aviation Safety Authority (CASA) in Australia, the Directorate General of Civil Aviation (DGCA) in India and Iran Civil Aviation Organization in Iran. Aviation accident rate statistics are necessarily estimates. According to the U.S. National Transportation Safety Board, general aviation in the United States (excluding charter) suffered 1.31 fatal accidents for every 100,000 hours of flying in 2005, compared to 0.016 for scheduled airline flights. In Canada, recreational flying accounted for 0.7 fatal accidents for every 1000 aircraft, while air taxi accounted for 1.1 fatal accidents for every 100,000 hours. More experienced GA pilots appear generally safer, although the relations between flight hours, accident frequency, and accident rates are complex and often difficult to assess. A small number of commercial aviation accidents in the United States have involved collisions with general aviation flights, notably TWA Flight 553, Piedmont Airlines Flight 22, Allegheny Airlines Flight 853, PSA Flight 182 and Aeromexico Flight 498.
https://en.wikipedia.org/wiki?curid=12612
Gracchi The Gracchi brothers, Tiberius and Gaius, were Romans who both served as tribunes of the plebs between 133 and 121 BC. They attempted to redistribute the occupation of the "ager publicus"—the public land hitherto controlled principally by aristocrats—to the urban poor and veterans, in addition to other social and constitutional reforms. After achieving some early success, both were assassinated by the Optimates, the conservative faction in the senate that opposed these reforms. The brothers were born to a plebeian branch of the old and noble Sempronia family. Their father was the elderly Tiberius Gracchus the Elder (or Tiberius Sempronius Gracchus) who was tribune of the plebs, praetor, consul, and censor. Their mother was Cornelia, daughter of Scipio Africanus, himself considered a hero by the Roman people for his part in the war against Carthage. Their parents had 12 children, but only one daughter—who later married Scipio Aemilianus (Scipio Africanus the Younger)—and two sons, Tiberius and Gaius, survived childhood. After the boys' father died while they were young, responsibility for their education fell to their mother. Cornelia ensured that the brothers had the best available Greek tutors, teaching them oratory and political science. The brothers were also well trained in martial pursuits; in horsemanship and combat they outshone all their peers. The older brother Tiberius was elected an augur at only 16 – according to the historian J. C. Stobart, had he taken the easy path rather than the cause of radical reform, he would have been clearly destined for consulship. Tiberius was the most distinguished young officer in the Third Punic War, Rome's last campaign against Carthage. He was the first to scale Carthage's walls; before that he saved an army of 20,000 men by skilled diplomacy. As the boys grew up, they developed strong connections with the ruling elite. Central to the Gracchi reforms was an attempt to address economic distress and its military consequences. Much public land ("ager publicus") had been divided among large landholders and speculators who further expanded their estates by driving peasants off their farms. While their old lands were being worked by slaves, the peasants were often forced into idleness in Rome where they had to subsist on handouts due to a scarcity of paid work. They could not legally join the army because they did not meet the property qualification; and this, together with the lack of public land to give in exchange for military service and the mutinies in the Numantine War, caused recruitment problems and troop shortages. The Gracchi aimed to address these problems by reclaiming lands from wealthy members of the senatorial class that could then be granted to soldiers; by restoring land to displaced peasants; by providing subsidized grain for the needy and by having the Republic pay for the clothing of its poorest soldiers. Tiberius was elected to the office of Tribune of the Plebs in 133 BC. He immediately began pushing for a programme of land reform, partly by invoking the 240-year-old Sextian-Licinian law that limited the amount of land that could be owned by a single individual. Using the powers of Lex Hortensia, Tiberius established a commission to oversee the redistribution of land holdings from the rich to the unlanded urban poor. The commission consisted of himself, his father-in-law and his brother Gaius. Even liberal senators were agitated by the proposed changes, fearing their own lands would be confiscated. 
Senators arranged for other tribunes to oppose the reforms. Tiberius then appealed to the people, and argued that a tribune who opposes the will of the people in favour of the rich is not a true tribune. The senators were left with only one constitutional response – to threaten prosecution after Tiberius's term as a tribune ended. This meant Tiberius had to stand for a second term. The senators obstructed his re-election. They also gathered an "ad hoc" force, with several of them personally marching to the Forum, and had Tiberius and some 300 of his supporters clubbed to death. This was the first open bloodshed in Roman politics in nearly four centuries. Tiberius's land reform commission continued distributing lands, albeit much more slowly than Tiberius had envisaged, as Senators were able to eliminate more of the commission's supporters by legal means. Ten years later, in 123 BC, Gaius took the same office as his brother, as a Tribune of the Plebs. Gaius was more practically minded than Tiberius and consequently was considered more dangerous by the senatorial class. He gained support from the agrarian poor by reviving the land reform programme and from the urban poor with various popular measures. He also sought support from the second estate, those equestrians who had not ascended to become senators. Many equestrians were publicans, in charge of tax collecting in the Roman province of Asia (located in western Anatolia), and of contracting for construction projects. The equestrian class would get to control a court that tried senators for misconduct in provincial administration. In effect, the equestrians replaced senators already serving at the court. Thus, Gaius became an opponent of senatorial influence. Other reforms implemented by Gaius included fixing prices on grain for the urban population and granting improvements in citizenship for Latins and others outside the city of Rome. With this broad coalition of supporters, Gaius held his office for two years and had much of his prepared legislation passed. This included winning an unconstitutional, although not necessarily illegal, re-election to the one-year office of Tribune. However Gaius's plans to extend rights to non-Roman Italians were eventually vetoed by another Tribune. A substantial proportion of the Roman poor, protective of their privileged Roman citizenship, turned against Gaius. With Gaius's support from the people weakened, the consul Lucius Opimius was able to crush the Gracchan movement by force. A mob was raised to assassinate Gaius. Knowing his death was imminent, he committed suicide on the Aventine Hill in 121 BC. All of his reforms were undermined except for the grain laws. Three thousand supporters were subsequently arrested and put to death in the proscriptions that followed. According to the classicist J. C. Stobart, Tiberius's Greek education had caused him to overestimate the reliability of the people as a power base, causing him to overplay his hand. In Rome, even when led by a bold Tribune, the people enjoyed much less influence than at the height of the Athenian democracy. Another problem for Gaius's aims was that the Roman constitution, specifically the Tribal Assembly, was designed to prevent any one individual governing for a sustained period of time – and there were several other checks and balances to prevent power being concentrated on any one person. 
Stobart adds that another reason for the failure was the Gracchi's idealism: they were deaf to the baser notes of human nature and failed to recognize how corrupt and selfish all sections of Roman society had become. According to Oswald Spengler, the characteristic mistake of the Gracchan age was to believe in the possibility of the reversibility of history – a form of idealism which according to Spengler was at that time shared by both sides of the political spectrum – Cato had sought to turn back the clock to the time of Cincinnatus, and restore virtue by returning to austerity. The philosopher Simone Weil ranked the conduct of the Gracchi second out of all the known cases of good-hearted conduct recorded by history for classical Rome, ahead of the Scipios and Virgil. Historian Michael Crawford attributes the disappearance of much of Tiberius Gracchus' support to the reduced level of citizen participation due to dispersal far from Rome, and sees his tribunate as marking a step in the Hellenization of the Roman aristocracy. Crawford asserts that Gaius Gracchus' extortion law shifted the balance of power in Rome and that the Gracchi made available a new political armoury which the oligarchy subsequently sought to exploit. The emergence of new forces of urban factions, rural voters, and others, engaging in continued conflict with each other for their own interests, meant that the problem of effective governance awaited resolution. The reforms of the Gracchi had come to an end by violence; and this provided a brutal precedent that would be followed by many future rulers of Rome.
https://en.wikipedia.org/wiki?curid=12615
Gossip Gossip is idle talk or rumor, especially about the personal or private affairs of others; the act is also known as dishing or tattling. Gossip has been researched in terms of its origins in evolutionary psychology, which has found gossip to be an important means for people to monitor cooperative reputations and so maintain widespread indirect reciprocity. Indirect reciprocity is a social interaction in which one actor helps another and is then benefited by a third party. Gossip has also been identified by Robin Dunbar, an evolutionary biologist, as aiding social bonding in large groups. The word is from Old English "godsibb", from "god" and "sibb", the term for the godparents of one's child or the parents of one's godchild, generally very close friends. In the 16th century, the word assumed the meaning of a person, mostly a woman, one who delights in idle talk, a newsmonger, a tattler. In the early 19th century, the term was extended from the talker to the conversation of such persons. The verb "to gossip", meaning "to be a gossip", first appears in Shakespeare. The term originates from the bedroom at the time of childbirth. Giving birth used to be a social event exclusively attended by women. The pregnant woman's female relatives and neighbours would congregate and idly converse. Over time, gossip came to mean talk of others. Others say that gossip comes from the same root as "gospel" — it is a contraction of "good spiel", meaning a good story. Gossip can serve a range of social functions. Mary Gormandy White, a human resource expert, gives a number of "signs" for identifying workplace gossip and suggests "five tips ... [to] handle the situation with aplomb." Peter Vajda identifies gossip as a form of workplace violence, noting that it is "essentially a form of attack." Gossip is thought by many to "empower one person while disempowering another" (Hafen). Accordingly, many companies have formal policies in their employee handbooks against gossip. Sometimes there is room for disagreement on exactly what constitutes unacceptable gossip, since workplace gossip may take the form of offhand remarks about someone's tendencies such as "He always takes a long lunch," or "Don't worry, that's just how she is." TLK Healthcare cites as examples of gossip "tattletailing to the boss without intention of furthering a solution or speaking to co-workers about something someone else has done to upset us." Corporate email can be a particularly dangerous method of gossip delivery, as the medium is semi-permanent and messages are easily forwarded to unintended recipients; accordingly, a Mass High Tech article advised employers to instruct employees against using company email networks for gossip. Low self-esteem and a desire to "fit in" are frequently cited as motivations for workplace gossip. DiFonzo and Bordia identify five essential functions that gossip serves in the workplace. According to Kurkland and Pelled, workplace gossip can be very serious depending upon the amount of power that the gossiper has over the recipient, which will in turn affect how the gossip is interpreted; they describe four types of power that are influenced by gossip. Workplace gossip can also have a number of negative consequences. Turner and Weed theorize that among the three main types of responders to workplace conflict are attackers, who cannot keep their feelings to themselves and express their feelings by attacking whatever they can. Attackers are further divided into up-front attackers and behind-the-back attackers. 
Turner and Weed note that the latter "are difficult to handle because the target person is not sure of the source of any criticism, nor even always sure that there is criticism." It is possible however, that there may be illegal, unethical, or disobedient behavior happening at the workplace and this may be a case where reporting the behavior may be viewed as gossip. It is then left up to the authority in charge to fully investigate the matter and not simply look past the report and assume it to be workplace gossip. Informal networks through which communication occurs in an organization are sometimes called the grapevine. In a study done by Harcourt, Richerson, and Wattier, it was found that middle managers in several different organizations believed that gathering information from the grapevine was a much better way of learning information than through formal communication with their subordinates (Harcourt, Richerson & Wattier). Some see gossip as trivial, hurtful and socially and/or intellectually unproductive. Some people view gossip as a lighthearted way of spreading information. A feminist definition of gossip presents it as "a way of talking between women, intimate in style, personal and domestic in scope and setting, a female cultural event which springs from and perpetuates the restrictions of the female role, but also gives the comfort of validation." (Jones, 1990:243) In Early Modern England the word "gossip" referred to companions in childbirth, not limited to the midwife. It also became a term for women-friends generally, with no necessary derogatory connotations. (OED n. definition 2. a. "A familiar acquaintance, friend, chum", supported by references from 1361 to 1873). It commonly referred to an informal local sorority or social group, who could enforce socially acceptable behaviour through private censure or through public rituals, such as "rough music", the cucking stool and the skimmington ride. In Thomas Harman’s "Caveat for Common Cursitors" 1566 a ‘walking mort’ relates how she was forced to agree to meet a man in his barn, but informed his wife. The wife arrived with her “five furious, sturdy, muffled gossips” who catch the errant husband with “his hosen about his legs” and give him a sound beating. The story clearly functions as a morality tale in which the gossips uphold the social order. In Sir Herbert Maxwell Bart's The Chevalier of the Splendid Crest [1900] at the end of chapter three the king is noted as referring to his loyal knight "Sir Thomas de Roos" in kindly terms as "my old gossip". Whilst a historical novel of that time the reference implies a continued use of the term "Gossip" as childhood friend as late as 1900. Judaism considers gossip spoken without a constructive purpose (known in Hebrew as an evil tongue, "lashon hara") as a sin. Speaking negatively about people, even if retelling true facts, counts as sinful, as it demeans the dignity of man — both the speaker and the subject of the gossip. According to "Proverbs" 18:8: "The words of a gossip are like choice morsels: they go down to a man's innermost parts." The Christian perspective on gossip is typically based on modern cultural assumptions of the phenomenon, especially the assumption that generally speaking, gossip is negative speech. However, due to the complexity of the phenomenon, biblical scholars have more precisely identified the form and function of gossip, even identifying a socially positive role for the social process as it is described in the New Testament. 
Of course, this does not mean that there are "not" numerous texts in the New Testament that see gossip as dangerous negative speech. Thus, for example, the Epistle to the Romans associates gossips ("backbiters") with a list of sins including sexual immorality and with murder: According to Matthew 18, Jesus also taught that conflict resolution among church members ought to begin with the aggrieved party attempting to resolve their dispute with the offending party alone. Only if this did not work would the process escalate to the next step, in which another church member would become involved. After that if the person at fault still would not "hear", the matter was to be fully investigated by the church elders, and if not resolved to be then exposed publicly. Based on texts like these portraying gossip negatively, many Christian authors generalize on the phenomenon. So, in order to gossip, writes Phil Fox Rose, we "must harden our heart towards the 'out' person. We draw a line between ourselves and them; define them as being outside the rules of Christian charity... We create a gap between ourselves and God's Love." As we harden our heart towards more people and groups, he continues, "this negativity and feeling of separateness will grow and permeate our world, and we'll find it more difficult to access God’s love in any aspect of our lives." The New Testament is also in favor of group accountability (Ephesians 5:11; 1st Tim 5:20; James 5:16; Gal 6:1-2; 1 Cor 12:26), which may be associated with gossip. Islam considers backbiting the equivalent of eating the flesh of one's dead brother. According to Muslims, backbiting harms its victims without offering them any chance of defense, just as dead people cannot defend against their flesh being eaten. Muslims are expected to treat others like brothers (regardless of their beliefs, skin color, gender, or ethnic origin), deriving from Islam's concept of brotherhood amongst its believers. Bahais consider backbiting to be the "worst human quality and the most great sin..." Therefore, even murder would be considered less reprobate than backbiting. Baha'u'llah stated, "Backbiting quencheth the light of the heart, and extinguisheth the life of the soul." When someone kills another, it only affects their physical condition. However, when someone gossips, it affects one in a different manner. From Robin Dunbar's evolutionary theories, gossip originated to help bond the groups that were constantly growing in size. To survive, individuals need alliances; but as these alliances grew larger, it was difficult if not impossible to physically connect with everyone. Conversation and language were able to bridge this gap. Gossip became a social interaction that helped the group gain information about other individuals without personally speaking to them.   It enabled people to keep up with what was going on in their social network. It also creates a bond between the teller and the hearer, as they share information of mutual interest and spend time together. It also helps the hearer learn about another individual’s behavior and helps them have a more effective approach to their relationship. Dunbar (2004) found that 65% of conversations consist of social topics. Dunbar (1994) argues that gossip is the equivalent of social grooming often observed in other primate species. Anthropological investigations indicate that gossip is a cross-cultural phenomenon, providing evidence for evolutionary accounts of gossip. 
There is very little evidence to suggest meaningful sex differences in the proportion of conversational time spent gossiping, and when there is a difference, women are only very slightly more likely to gossip compared with men. Further support for the evolutionary significance of gossip comes from a recent study published in the peer-reviewed journal Science. Anderson and colleagues (2011) found that faces paired with negative social information dominate visual consciousness to a greater extent than faces paired with positive or neutral social information during a binocular rivalry task. Binocular rivalry occurs when two different stimuli are presented to each eye simultaneously and the two percepts compete for dominance in visual consciousness. While this occurs, an individual will consciously perceive one of the percepts while the other is suppressed. After a time, the other percept will become dominant and an individual will become aware of the second percept. Finally, the two percepts will alternate back and forth in terms of visual awareness. The study by Anderson and colleagues (2011) indicates that higher order cognitive processes, like evaluative information processing, can influence early visual processing. That only negative social information differentially affected the dominance of the faces during the task alludes to the unique importance of knowing information about an individual who should be avoided. That the positive social information did not produce greater perceptual dominance of the matched face indicates that negative information about an individual may be more salient to our behavior than positive information. Gossip also gives information about social norms and guidelines for behavior. Gossip usually comments on how appropriate a behavior was, and the mere act of repeating it signifies its importance. In this sense, gossip is effective regardless of whether it is positive or negative. Some theorists have proposed that gossip is actually a pro-social behavior intended to allow an individual to correct their socially prohibitive behavior without direct confrontation of the individual. By gossiping about an individual's acts, other individuals can subtly indicate that said acts are inappropriate and allow the individual to correct their behavior (Schoeman 1994). Individuals who are perceived to engage in gossiping regularly are seen as having less social power and being less liked. The type of gossip being exchanged also affects likeability, whereby those who engage in negative gossip are less liked than those who engage in positive gossip. In a study done by Turner and colleagues (2003), having a prior relationship with a gossiper was not found to protect the gossiper from less favorable personality ratings after gossip was exchanged. In the study, pairs of individuals were brought into a research lab to participate. Either the two individuals were friends prior to the study or they were strangers scheduled to participate at the same time. One of the individuals was a confederate of the study, and they engaged in gossiping about the research assistant after she left the room. The gossip exchanged was either positive or negative. Regardless of gossip type (positive versus negative) or relationship type (friend versus stranger), the gossipers were rated as less trustworthy after sharing the gossip. Walter Block has suggested that while gossip and blackmail both involve the disclosure of unflattering information, the blackmailer is arguably ethically superior to the gossip. 
Block writes: "In a sense, the gossip is much "worse" than the blackmailer, for the blackmailer has given the blackmailed a chance to silence him. The gossip exposes the secret without warning." The victim of a blackmailer is thus offered choices denied to the subject of gossip, such as deciding if the exposure of his or her secret is worth the cost the blackmailer demands. Moreover, in refusing a blackmailer's offer one is in no worse a position than with the gossip. Adds Block, "It is indeed difficult, then, to account for the vilification suffered by the blackmailer, at least compared to the gossip, who is usually dismissed with slight contempt and smugness." Contemporary critiques of gossip may concentrate on or become subsumed in the discussion of social media such as Facebook.
https://en.wikipedia.org/wiki?curid=12616
Gothic fiction Gothic fiction, which is largely known by the subgenre of Gothic horror, is a genre or mode of literature and film that combines fiction and horror, death, and at times romance. Its origin is attributed to English author Horace Walpole, with his 1764 novel "The Castle of Otranto", subtitled (in its second edition) "A Gothic Story". Gothic fiction tends to place emphasis on both emotion and a pleasurable kind of terror, serving as an extension of the Romantic literary movement that was relatively new at the time that Walpole's novel was published. The most common of these "pleasures" among Gothic readers was the sublime: an indescribable feeling that "takes us beyond ourselves." The literary genre originated in England in the second half of the 18th century where, following Walpole, it was further developed by Clara Reeve, Ann Radcliffe, William Thomas Beckford and Matthew Lewis. The genre had much success in the 19th century, as witnessed in prose by Mary Shelley's "Frankenstein" and the works of Edgar Allan Poe as well as Charles Dickens with his novella, "A Christmas Carol", and in poetry in the work of Samuel Taylor Coleridge and Lord Byron. Another well-known novel in this genre, dating from the late Victorian era, is Bram Stoker's "Dracula". The name "Gothic", which originally referred to the Goths, and then came to mean "German", refers to the Gothic architecture of the medieval era of European history, in which many of these stories take place. This extreme form of Romanticism was very popular throughout Europe, especially among English- and German-language writers and artists. The English Gothic novel also led to new novel types such as the German "Schauerroman" and the French "roman noir". The novel usually regarded as the first Gothic novel is "The Castle of Otranto" by English author Horace Walpole, which was first published in 1764. Walpole's declared aim was to combine elements of the medieval romance, which he deemed too fanciful, and the modern novel, which he considered to be too confined to strict realism. The basic plot established many of the staple Gothic generic traits, including threatening mysteries and ancestral curses, as well as countless trappings such as hidden passages and oft-fainting heroines. Walpole published the first edition disguised as a medieval romance from Italy discovered and republished by a fictitious translator. When Walpole admitted to his authorship in the second edition, its originally favourable reception by literary reviewers changed into rejection. The reviewers' rejection reflected a larger cultural bias: the romance was usually held in contempt by the educated as a tawdry and debased kind of writing; the genre had gained some respectability only through the works of Samuel Richardson and Henry Fielding. A romance with superstitious elements, and moreover void of didactic intention, was considered a setback and unacceptable. Walpole's forgery, together with the blend of history and fiction, contravened the principles of the Enlightenment and associated the Gothic novel with fake documentation. Clara Reeve, best known for her work "The Old English Baron" (1778), set out to take Walpole's plot and adapt it to the demands of the time by balancing fantastic elements with 18th-century realism. In her preface, Reeve wrote: "This Story is the literary offspring of "The Castle of Otranto", written upon the same plan, with a design to unite the most attractive and interesting circumstances of the ancient Romance and modern Novel." 
The question now arose whether supernatural events that were not as evidently absurd as Walpole's might lead simpler minds to believe them possible. Reeve's contribution to the development of Gothic fiction can therefore be demonstrated on at least two fronts. First, there is the reinforcement of the Gothic narrative framework, one that focuses on expanding the imaginative domain so as to include the supernatural without losing the realism that marks the novel that Walpole pioneered. Second, Reeve also sought to find the appropriate formula to ensure that the fiction is believable and coherent. The result is that she spurned specific aspects of Walpole's style, such as his tendency to incorporate so much humor or comic relief that it diminishes the Gothic tale's ability to induce fear. In 1777, Reeve enumerated Walpole's excesses in this respect: a sword so large as to require an hundred men to lift it; a helmet that by its own weight forces a passage through a court-yard into an arched vault, big enough for a man to go through; a picture that walks out of its frame; a skeleton ghost in a hermit's cowl... Although the succession of Gothic writers did not exactly heed Reeve's focus on emotional realism, she was able to posit a framework that keeps Gothic fiction within the realm of the probable. This aspect remained a challenge for authors in this genre after the publication of "The Old English Baron". Outside of its providential context, the supernatural would often risk veering towards the absurd. Ann Radcliffe developed the technique of the "explained supernatural", in which every seemingly supernatural intrusion is eventually traced back to natural causes. Radcliffe has been called both "the Great Enchantress" and "Mother Radcliffe" due to her influence on both Gothic literature and the female Gothic. Radcliffe's use of visual elements and their effects constitutes an innovative strategy for reading the world through "linguistic visual patterns" and developing an "ethical gaze", allowing readers to visualize the events through words, understand the situations, and feel the terror which the characters themselves are experiencing. Her success attracted many imitators. Among other elements, Ann Radcliffe introduced the brooding figure of the Gothic villain ("A Sicilian Romance" in 1790), a literary device that would come to be defined as the Byronic hero. Radcliffe's novels, above all "The Mysteries of Udolpho" (1794), were best-sellers. However, along with most novels at the time, they were looked down upon by many well-educated people as sensationalist nonsense. Radcliffe also inspired the emerging idea of "Gothic feminism", which she expressed through the notion of female power achieved through pretended and staged weakness. The establishment of this idea began the female Gothic's movement towards "challenging... the concept of gender itself". Radcliffe also provided an aesthetic for the genre in an influential article "On the Supernatural in Poetry", examining the distinction and correlation between horror and terror in Gothic fiction, utilizing the uncertainties of terror in her works to produce a model of the uncanny. Combining experiences of terror and wonder with visual description was a technique that pleased readers and set Radcliffe apart from other Gothic writers. At least two Gothic authors utilize the literary concept of translation as a framing device for their novels. 
Ann Radcliffe's Gothic novel "The Italian" boasts a weighty framing, wherein her narrator claims that the story the reader is about to hear has been recorded and translated from a manuscript entrusted to an Italian man by a close friend who overheard the story confessed in a church. Radcliffe uses this translational framing to show how her extraordinary story has traveled to the reader. In the fictitious preface to his Gothic novel "The Castle of Otranto", Horace Walpole claims his story was produced in Italy, recorded in German, then discovered and translated into English. Walpole's story of transnational translation lends his novel an air of tempting exoticism that is highly characteristic of the Gothic genre. Romantic literary movements developed in continental Europe concurrently with the development of the Gothic novel. The "roman noir" ("black novel") appeared in France, written by such authors as François Guillaume Ducray-Duminil, Baculard d'Arnaud and Madame de Genlis. In Germany, the "Schauerroman" ("shudder novel") gained traction with writers such as Friedrich Schiller, with novels like "The Ghost-Seer" (1789), and Christian Heinrich Spiess, with novels like "Das Petermännchen" (1791/92). These works were often more horrific and violent than the English Gothic novel. English novelist Matthew Lewis's lurid tale of monastic debauchery, black magic and diabolism entitled "The Monk" (1796) brought the continental "horror" mode to England. Lewis's portrayal of depraved monks, sadistic inquisitors and spectral nuns—and his scurrilous view of the Catholic Church—appalled some readers, but "The Monk" was important in the genre's development. "The Monk" also influenced Ann Radcliffe in her last novel, "The Italian" (1797). In this book, the hapless protagonists are ensnared in a web of deceit by a malignant monk called Schedoni and eventually dragged before the tribunals of the Inquisition in Rome, leading one contemporary to remark that if Radcliffe wished to transcend the horror of these scenes, she would have to visit hell itself. The Marquis de Sade used a subgothic framework for some of his fiction, notably "The Misfortunes of Virtue" and "Eugenie de Franval", though the Marquis himself never thought of his work in this way. Sade critiqued the genre in the preface of his "Reflections on the novel" (1800), stating that the Gothic is "the inevitable product of the revolutionary shock with which the whole of Europe resounded". Contemporary critics of the genre also noted the correlation between the French Revolutionary Terror and the "terrorist school" of writing represented by Radcliffe and Lewis. Sade considered "The Monk" to be superior to the work of Ann Radcliffe. German gothic fiction is usually described by the term "Schauerroman" ("shudder novel"). However, genres of "Gespensterroman"/"Geisterroman" ("ghost novel"), "Räuberroman" ("robber novel"), and "Ritterroman" ("chivalry novel") also frequently share plot and motifs with the British "gothic novel". As its name suggests, the "Räuberroman" focuses on the life and deeds of outlaws, influenced by Friedrich von Schiller's drama "The Robbers" (1781). Heinrich Zschokke's "Abällino, der grosse Bandit" (1793) was translated into English by M.G. Lewis as "The Bravo of Venice" in 1804. The "Ritterroman" focuses on the life and deeds of knights and soldiers, but features many elements found in the gothic novel, such as magic, secret tribunals, and a medieval setting. 
Benedikte Naubert's novel "Hermann of Unna" (1788) is seen as being very close to the "Schauerroman" genre. While the term "Schauerroman" is sometimes equated with the term "Gothic novel", this is only partially true. Both genres are based on the terrifying side of the Middle Ages, and both frequently feature the same elements (castles, ghosts, monsters, etc.). However, the "Schauerroman"'s key elements are necromancy and secret societies, and it is markedly more pessimistic than the British Gothic novel. All those elements are the basis for Friedrich von Schiller's unfinished novel "The Ghost-Seer" (1786–1789). The motif of secret societies is also present in Karl Grosse's "Horrid Mysteries" (1791–1794) and Christian August Vulpius's "Rinaldo Rinaldini, the Robber Captain" (1797). Other early authors and works included Christian Heinrich Spiess, with his works "Das Petermännchen" (1793), "Der alte Überall und Nirgends" (1792), "Die Löwenritter" (1794), and "Hans Heiling, vierter und letzter Regent der Erd- Luft- Feuer- und Wasser-Geister" (1798); Heinrich von Kleist's short story "Das Bettelweib von Locarno" (1797); and Ludwig Tieck's "Der blonde Eckbert" (1797) and "Der Runenberg" (1804). Early examples of female-authored Gothic include Sophie Albrecht's "Das höfliche Gespenst" (1797) and "Graumännchen oder die Burg Rabenbühl: eine Geistergeschichte altteutschen Ursprungs" (1799). During the next two decades, the most famous author of Gothic literature in Germany was the polymath E. T. A. Hoffmann. His novel "The Devil's Elixirs" (1815) was influenced by Lewis's novel "The Monk", and even mentions it within the book. The novel also explores the motif of the doppelgänger, a term coined by another German author (and supporter of Hoffmann), Jean Paul, in his humorous novel "Siebenkäs" (1796–1797). Hoffmann also wrote an opera based on Friedrich de la Motte Fouqué's Gothic story "Undine", with de la Motte Fouqué himself writing the libretto. Aside from Hoffmann and de la Motte Fouqué, three other important authors from the era were Joseph Freiherr von Eichendorff ("The Marble Statue", 1819), Ludwig Achim von Arnim ("Die Majoratsherren", 1819), and Adelbert von Chamisso ("Peter Schlemihls wundersame Geschichte", 1814). After them, Wilhelm Meinhold wrote "The Amber Witch" (1838) and "Sidonia von Bork" (1847). Also writing in the German language, Jeremias Gotthelf wrote "The Black Spider" (1842), an allegorical work that used Gothic themes. The last work from German writer Theodor Storm, "The Rider on the White Horse" (1888), also uses Gothic motifs and themes. At the beginning of the 20th century, many German authors wrote works influenced by the "Schauerroman", including Hanns Heinz Ewers. Russian Gothic was not, until the 1990s, viewed as a genre or label by Russian critics. When it was used, the word "gothic" described (mostly early) works of Fyodor Dostoyevsky. Most critics simply used tags such as "Romanticism" and "fantastique", as in the 1984 story collection translated into English as "Russian 19th-Century Gothic Tales", but originally titled "Фантастический мир русской романтической повести", literally, "The Fantastic World of the Russian Romantic Novella". 
However, since the mid-1980s, Russian gothic fiction has been discussed as a genre in books such as "The Gothic-Fantastic in Nineteenth-Century Russian Literature", "European Gothic: A Spirited Exchange 1760–1960", "The Russian Gothic novel and its British antecedents" and "Goticheskiy roman v Rossii (The Gothic Novel in Russia)". The first Russian author whose work has been described as gothic fiction is considered to be Nikolay Mikhailovich Karamzin. While many of his works feature gothic elements, the first considered to belong purely under the gothic fiction label is "Ostrov Borngolm" ("Island of Bornholm") from 1793. Then, nearly 10 years later, Nikolay Ivanovich Gnedich followed suit with his 1803 novel "Don Corrado de Gerrera", set in Spain during the reign of Philip II. The term "gothic" is sometimes also used to describe the ballads of Vasily Andreyevich Zhukovsky, particularly "Ludmila" (1808) and "Svetlana" (1813). The following poems are also now considered to belong to the gothic genre: Meshchevskiy's "Lila", Katenin's "Olga", Pushkin's "The Bridegroom", Pletnev's "The Gravedigger" and Lermontov's "Demon". Other authors of the Romantic era include Antony Pogorelsky (penname of Alexey Alexeyevich Perovsky), Orest Somov, Oleksa Storozhenko, Alexandr Pushkin, Nikolai Alekseevich Polevoy, Mikhail Lermontov (for his work "Stuss"), and Alexander Bestuzhev-Marlinsky. Pushkin is particularly important, as his 1833 short story "The Queen of Spades" was so popular that it was adapted into operas and, later, films by both Russian and foreign artists. Some parts of Mikhail Yuryevich Lermontov's "A Hero of Our Time" (1840) are also considered to belong in the gothic genre, but they lack the supernatural elements of other Russian gothic stories. The key author of the transition from romanticism to realism, Nikolai Vasilievich Gogol, who was also one of the most important authors of romanticism, produced a number of works which qualify as gothic fiction. Each of his three short story collections features a number of stories that fall within the gothic genre, as well as many stories that contain gothic elements. These include "St John's Eve" and "A Terrible Vengeance" from "Evenings on a Farm Near Dikanka" (1831–1832); "The Portrait" from "Arabesques" (1835); and "Viy" from "Mirgorod" (1835). While all are well-known, the last is probably the most famous, having inspired at least eight movie adaptations (two now considered lost), one animated movie, two documentaries, and a video game. Gogol's work differs from western European gothic fiction as his cultural influences drew from Ukrainian folklore, Cossack lifestyle and, being a very religious man, Orthodox Christianity. Other relevant authors of Gogol's era include Vladimir Fyodorovich Odoevsky ("The Living Corpse", written 1838, published 1844; "The Ghost"; "The Sylphide"; as well as short stories), Count Aleksey Konstantinovich Tolstoy ("The Family of the Vourdalak", 1839, and "The Vampire", 1841), Mikhail Zagoskin ("Unexpected Guests"), Józef Sękowski/Osip Senkovsky ("Antar"), and Yevgeny Baratynsky ("The Ring"). After Gogol, Russian literature saw the rise of realism, but many authors continued to write stories that ranged within gothic fiction territory. Ivan Sergeyevich Turgenev, one of the most celebrated realists, wrote "Faust" (1856), "Phantoms" (1864), "Song of the Triumphant Love" (1881), and "Clara Milich" (1883). 
Another classic Russian realist, Fyodor Mikhailovich Dostoyevsky, incorporated gothic elements in many of his works, although none of his novels are seen as purely gothic. Grigory Petrovich Danilevsky, who wrote historical and early science fiction novels and stories, wrote "Mertvec-ubiytsa" ("Dead Murderer") in 1879. Also, Grigori Alexandrovich Machtet wrote the story "Zaklyatiy kazak", which may now also be considered gothic. During the last years of the Russian Empire in the early 20th century, many authors continued to write in the gothic fiction genre. These include historian and historical fiction writer Alexander Valentinovich Amfiteatrov; Leonid Nikolaievich Andreyev, who developed psychological characterization; symbolist Valery Yakovlevich Bryusov; Alexander Grin; Anton Pavlovich Chekhov; and Aleksandr Ivanovich Kuprin. Nobel Prize winner Ivan Alekseyevich Bunin wrote "Dry Valley" (1912), which is considered to be influenced by gothic literature. In her monograph on the subject, Muireann Maguire writes, "The centrality of the Gothic-fantastic to Russian fiction is almost impossible to exaggerate, and certainly exceptional in the context of world literature." Further contributions to the Gothic genre were seen in the work of the Romantic poets. Prominent examples include Samuel Taylor Coleridge's "The Rime of the Ancient Mariner" and "Christabel" as well as John Keats' "La Belle Dame sans Merci" (1819) and "Isabella, or the Pot of Basil" (1820) which feature mysteriously fey ladies. In the latter poem the names of the characters, the dream visions and the macabre physical details are influenced by the novels of premiere Gothicist Ann Radcliffe. Percy Bysshe Shelley's first published work was the Gothic novel "Zastrozzi" (1810), about an outlaw obsessed with revenge against his father and half-brother. Shelley published a second Gothic novel in 1811, "St. Irvyne; or, The Rosicrucian", about an alchemist who seeks to impart the secret of immortality. The poetry, romantic adventures, and character of Lord Byron—characterised by his spurned lover Lady Caroline Lamb as "mad, bad and dangerous to know"—were another inspiration for the Gothic, providing the archetype of the Byronic hero. Byron features as the title character in Lady Caroline's own Gothic novel "Glenarvon" (1816). Byron was also the host of the celebrated ghost-story competition involving himself, Percy Bysshe Shelley, Mary Shelley, and John William Polidori at the Villa Diodati on the banks of Lake Geneva in the summer of 1816. This occasion was productive of both Mary Shelley's "Frankenstein" (1818) and Polidori's "The Vampyre" (1819), featuring the Byronic Lord Ruthven. "The Vampyre" has been accounted by cultural critic Christopher Frayling as one of the most influential works of fiction ever written and spawned a craze for vampire fiction and theatre (and latterly film) which has not ceased to this day. Mary Shelley's novel, though clearly influenced by the Gothic tradition, is often considered the first science fiction novel, despite the omission in the novel of any scientific explanation of the monster's animation and the focus instead on the moral issues and consequences of such a creation. A late example of traditional Gothic is "Melmoth the Wanderer" (1820) by Charles Maturin, which combines themes of anti-Catholicism with an outcast Byronic hero. Jane C. Loudon's "The Mummy!" 
(1827) features standard Gothic motifs, characters, and plotting, but with one significant twist: it is set in the twenty-second century and speculates on fantastic scientific developments that might have occurred four hundred years in the future, thus making it one of the earliest examples, along with "Frankenstein", of the science fiction genre developing from Gothic traditions. By the Victorian era, Gothic had ceased to be the dominant genre, and was dismissed by most critics. (Indeed, the form's popularity as an established genre had already begun to erode with the success of the historical romance popularised by Sir Walter Scott.) However, in many ways, it was now entering its most creative phase. Readers and critics later began to reconsider a number of previously overlooked Penny Blood or "penny dreadful" serial fictions by such authors as George W. M. Reynolds, who wrote a trilogy of Gothic horror novels: "Faust" (1846), "Wagner the Wehr-wolf" (1847) and "The Necromancer" (1857). Reynolds was also responsible for "The Mysteries of London", which has been accorded an important place in the development of the urban as a particularly Victorian Gothic setting, an area within which interesting links can be made with established readings of the work of Dickens and others. Another famous penny dreadful of this era was the anonymously authored "Varney the Vampire" (1847). "Varney" is the tale of the vampire Sir Francis Varney, and introduced many of the tropes present in vampire fiction recognizable to modern audiences — it was the first story to refer to sharpened teeth for a vampire. The formal relationship between these fictions, serialised for predominantly working class audiences, and the roughly contemporaneous sensation fictions serialised in middle class periodicals is also an area worthy of inquiry. An important and innovative reinterpreter of the Gothic in this period was Edgar Allan Poe. Poe focused less on the traditional elements of gothic stories and more on the psychology of his characters as they often descended into madness. Poe's critics complained about his "German" tales, to which he replied, 'that terror is not of Germany, but of the soul'. Poe, a critic himself, believed that terror was a legitimate literary subject. His story "The Fall of the House of Usher" (1839) explores these 'terrors of the soul' while revisiting classic Gothic tropes of aristocratic decay, death, and madness. The legendary villainy of the Spanish Inquisition, previously explored by Gothicists Radcliffe, Lewis, and Maturin, reappears in "The Pit and the Pendulum" (1842), supposedly based on a true account of a survivor. The influence of Ann Radcliffe is also detectable in Poe's "The Oval Portrait" (1842), including an honorary mention of her name in the text of the story. The influence of Byronic Romanticism evident in Poe is also apparent in the work of the Brontë sisters. Emily Brontë's "Wuthering Heights" (1847) transports the Gothic to the forbidding Yorkshire Moors and features ghostly apparitions and a Byronic hero in the person of the demonic Heathcliff. The Brontës' fiction is seen by some feminist critics as offering prime examples of Female Gothic, exploring woman's entrapment within domestic space and subjection to patriarchal authority and the transgressive and dangerous attempts to subvert and escape such restriction. Emily's Cathy and Charlotte Brontë's "Jane Eyre" are both examples of female protagonists in such a role. 
Louisa May Alcott's Gothic potboiler, "A Long Fatal Love Chase" (written in 1866, but published in 1995) is also an interesting specimen of this subgenre. Elizabeth Gaskell's tales "The Doom of the Griffiths" (1858) "Lois the Witch", and "The Grey Woman" all employ one of the most common themes of Gothic fiction, the power of ancestral sins to curse future generations, or the fear that they will. The genre was also a heavy influence on more mainstream writers, such as Charles Dickens, who read Gothic novels as a teenager and incorporated their gloomy atmosphere and melodrama into his own works, shifting them to a more modern period and an urban setting, including "Oliver Twist" (1837–8), "Bleak House" (1854) (Mighall 2003) and "Great Expectations" (1860–61). These pointed to the juxtaposition of wealthy, ordered and affluent civilisation next to the disorder and barbarity of the poor within the same metropolis. Bleak House in particular is credited with seeing the introduction of urban fog to the novel, which would become a frequent characteristic of urban Gothic literature and film (Mighall 2007). His most explicitly Gothic work is his last novel, "The Mystery of Edwin Drood," which he did not live to complete and which was published in unfinished state upon his death in 1870. The mood and themes of the Gothic novel held a particular fascination for the Victorians, with their morbid obsession with mourning rituals, mementos, and mortality in general. The 1880s saw the revival of the Gothic as a powerful literary form allied to fin de siecle, which fictionalized contemporary fears like ethical degeneration and questioned the social structures of the time. Classic works of this Urban Gothic include Robert Louis Stevenson's "Strange Case of Dr Jekyll and Mr Hyde" (1886), Oscar Wilde's "The Picture of Dorian Gray" (1891), George du Maurier's "Trilby" (1894), Richard Marsh's "The Beetle" (1897), Henry James' "The Turn of the Screw" (1898), and the stories of Arthur Machen. Some of the works of Canadian writer Gilbert Parker also fall into the genre, including the stories in "The Lane that Had No Turning" (1900). The most famous Gothic villain ever, Count Dracula, was created by Bram Stoker in his novel "Dracula" (1897). Stoker's book also established Transylvania and Eastern Europe as the "locus classicus" of the Gothic. Gaston Leroux's serialized novel "The Phantom of the Opera" (1909–1910) is another well-known example of gothic fiction from the early 20th century. In America, two notable writers of the end of the 19th century, in the Gothic tradition, were Ambrose Bierce and Robert W. Chambers. Bierce's short stories were in the horrific and pessimistic tradition of Poe. Chambers, though, indulged in the decadent style of Wilde and Machen, even to the extent of his inclusion of a character named 'Wilde' in his "The King in Yellow." In Ireland, Gothic fiction tended to be the purveyance of the Anglo-Irish Protestant Ascendancy. According to literary critic Terry Eagleton, Charles Maturin, Sheridan Le Fanu, and Bram Stoker form the core of the Irish gothic sub-genre with stories featuring castles set in a barren landscape and a cast of remote aristocrats dominating an atavistic peasantry, which represent in allegorical form the political plight of colonial Ireland subjected to the Protestant Ascendancy. Le Fanu's use of the gloomy villain, forbidding mansion, and persecuted heroine in "Uncle Silas" (1864) shows the direct influence of both Walpole's "Otranto" and Radcliffe's "Udolpho". 
Le Fanu's short story collection "In a Glass Darkly" (1872) includes the superlative vampire tale "Carmilla", which provided fresh blood for that particular strand of the Gothic and influenced Bram Stoker's vampire novel "Dracula" (1897). Female Anglo-Irish authors also wrote Gothic fiction in the 19th-century including Sydney Owenson, most famous for "The Wild Irish Girl", and Regina Maria Roche, whose novel "Clermont" was satirized by Jane Austen in "Northanger Abbey". Irish Catholics also wrote Gothic fiction in the 19th century, so although the Anglo-Irish dominated and defined the sub-genre, they did not own it. Irish Catholic Gothic writers included Gerald Griffin, James Clarence Mangan, and John and Michael Banim. William Carleton was a notable Gothic writer, but he converted from Catholicism to Anglicanism during his life, which complicates his position in this dichotomy. The conventions of Gothic literature were not invented in the eighteenth century by Horace Walpole. The components that would eventually combine into Gothic literature had a rich history by the time Walpole presented a fictitious medieval manuscript in "The Castle of Otranto" in 1764. Gothic literature is often described with words such as "wonder" and "terror." This sense of wonder and terror, which provides the suspension of disbelief so important to the Gothic—which, except for when it is parodied, even for all its occasional melodrama, is typically played straight, in a self-serious manner—requires the imagination of the reader to be willing to accept the idea that there might be something "beyond that which is immediately in front of us." The mysterious imagination necessary for Gothic literature to have gained any traction had been growing for some time before the advent of the Gothic. The necessity for this came as the known world was beginning to become more explored, reducing the inherent geographical mysteries of the world. The edges of the map were being filled in, and no one was finding any dragons. The human mind required a replacement. Clive Bloom theorizes that this void in the collective imagination was critical in the development of the cultural possibility for the rise of the Gothic tradition. The setting of most early Gothic works was a medieval one, but this had been a common theme long before Walpole. In Britain especially, there was a desire to reclaim a shared past. This obsession frequently led to extravagant architectural displays, and sometimes mock tournaments were held. It was not merely in literature that a medieval revival made itself felt, and this too contributed to a culture ready to accept a perceived medieval work in 1764. The Gothic often uses scenery of decay, death, and morbidity to achieve its effects (especially in the Italian Horror school of Gothic). However, Gothic literature was not the origin of this tradition; indeed it was far older. The corpses, skeletons, and churchyards so commonly associated with the early Gothic were popularized by the Graveyard Poets, and were also present in novels such as Daniel Defoe's Journal of the Plague Year, which contains comical scenes of plague carts and piles of plague corpses. Even earlier, poets like Edmund Spenser evoked a dreary and sorrowful mood in such poems as Epithalamion. All of the aspects of pre-Gothic literature mentioned above occur to some degree in the Gothic, but even taken together, they still fall short of true Gothic. What was lacking was an aesthetic, which would serve to tie the elements together. 
Bloom notes that this aesthetic must take the form of a theoretical or philosophical core, which is necessary to "sav[e] the best tales from becoming mere anecdote or incoherent sensationalism." In this particular case, the aesthetic needed to be an emotional one, which was finally provided by Edmund Burke's 1757 work, "A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and the Beautiful", which "finally codif[ied] the gothic emotional experience." Specifically, Burke's thoughts on the Sublime, Terror, and Obscurity were most applicable. These sections can be summarized thus: the Sublime is that which is or produces the "strongest emotion which the mind is capable of feeling"; the Sublime is most often evoked by Terror; and to cause Terror we need some amount of Obscurity—we can't know everything about that which is inducing Terror—or else "a great deal of the apprehension vanishes"; Obscurity is necessary in order to experience the Terror of the unknown. Bloom asserts that Burke's descriptive vocabulary was essential to the Romantic works that eventually informed the Gothic. The birth of the Gothic is also thought to have been influenced by political upheaval: researchers have linked its emergence to the turmoil beginning with the English Civil War and culminating in the Jacobite rebellion of 1745, closer in time to the first Gothic novel (1764). A collective political memory and any deep cultural fears associated with it likely contributed to early Gothic villain characters as literary representatives of defeated Tory barons or Royalists "rising" from their political graves in the pages of the early Gothic to terrorize the bourgeois reader of late eighteenth-century England. The excesses, stereotypes, and frequent absurdities of the traditional Gothic made it rich territory for satire. The most famous parody of the Gothic is Jane Austen's novel "Northanger Abbey" (1818), in which the naive protagonist, after reading too much Gothic fiction, conceives of herself as the heroine of a Radcliffian romance and imagines murder and villainy on every side, though the truth turns out to be much more prosaic. Jane Austen's novel is valuable for including a list of early Gothic works since known as the Northanger Horrid Novels. These books, with their lurid titles, were once thought to be the creations of Jane Austen's imagination, though later research by Michael Sadleir and Montague Summers confirmed that they did actually exist and stimulated renewed interest in the Gothic. They are currently all being reprinted. Another example of Gothic parody in a similar vein is "The Heroine" by Eaton Stannard Barrett (1813). Cherry Wilkinson, a fatuous female protagonist with a history of novel-reading, fancies herself the heroine of a Gothic romance. She perceives and models reality according to the stereotypes and typical plot structures of the Gothic novel, leading to a series of absurd events culminating in catastrophe. After her downfall, her affectations and excessive imaginings are eventually subdued by the voice of reason in the form of Stuart, a paternal figure, under whose guidance the protagonist receives a sound education and correction of her misguided taste. Notable English 20th-century writers in the Gothic tradition include Algernon Blackwood, William Hope Hodgson, M. R. James, Hugh Walpole, and Marjorie Bowen. 
In America, pulp magazines such as "Weird Tales" reprinted classic Gothic horror tales from the previous century by authors such as Poe, Arthur Conan Doyle, and Edward Bulwer-Lytton, and printed new stories by modern authors featuring both traditional and new horrors. The most significant of these was H. P. Lovecraft, who also wrote a conspectus of the Gothic and supernatural horror tradition in his "Supernatural Horror in Literature" (1936) as well as developing a Mythos that would influence Gothic and contemporary horror well into the 21st century. Lovecraft's protégé, Robert Bloch, contributed to "Weird Tales" and penned "Psycho" (1959), which drew on the classic interests of the genre. From these, the Gothic genre "per se" gave way to modern horror fiction, regarded by some literary critics as a branch of the Gothic, although others use the term to cover the entire genre. In the 20th century, Gothic fiction and Modernism influenced each other. This is often most evident in detective fiction, horror fiction, and science fiction, but the influence of the Gothic can also be seen in the high literary modernism of the 20th century. Oscar Wilde's "The Picture of Dorian Gray", published in 1890, initiates the re-working of older literary forms and myths that becomes common in the work of W. B. Yeats, T. S. Eliot, and James Joyce, among others. In Joyce's "Ulysses", the living are transformed into ghosts, which points to an Ireland in stasis at the time, but also a history of cycles of trauma from the Great Famine in the 1840s through to the current moment of the text. The way "Ulysses" uses tropes of the Gothic such as ghosts and hauntings while removing the literally supernatural elements of 19th-century Gothic fiction is indicative of the general form of modernist gothic writing in the first half of the 20th century. Gothic romances became popular during the 1950s, 1960s, and 1970s, with authors such as Phyllis A. Whitney, Joan Aiken, Dorothy Eden, Victoria Holt, Barbara Michaels, Mary Stewart, and Jill Tattersall. Many featured covers depicting a terror-stricken woman in diaphanous attire in front of a gloomy castle, often with a single lit window. Many were published under the Paperback Library Gothic imprint and were marketed to a female audience. Though the authors were mostly women, some men wrote Gothic romances under female pseudonyms. For instance, the prolific Clarissa Ross and Marilyn Ross were pseudonyms for the male author Dan Ross, and Frank Belknap Long published Gothics under his wife's name, Lyda Belknap Long. Another example is British writer Peter O'Donnell, who wrote under the pseudonym Madeleine Brent. Outside of imprints like Love Spell, which stopped publishing in 2010, very few books seem to be published using the term today. The genre also influenced American writing, giving rise to the Southern Gothic genre, which combines some Gothic sensibilities (such as the grotesque) with the setting and style of the Southern United States. Examples include Erskine Caldwell, William Faulkner, Carson McCullers, John Kennedy Toole, Manly Wade Wellman, Eudora Welty, Tennessee Williams, Truman Capote, Flannery O'Connor, Davis Grubb, Anne Rice and Harper Lee. Contemporary American writers in this tradition include Joyce Carol Oates, in such novels as "Bellefleur" and "A Bloodsmoor Romance" and short story collections such as "Night-Side" (Skarda 1986b), and Raymond Kennedy in his novel "Lulu Incognito." 
The Southern Ontario Gothic applies a similar sensibility to a Canadian cultural context. Robertson Davies, Alice Munro, Barbara Gowdy, Timothy Findley and Margaret Atwood have all produced works that are notable exemplars of this form. Another writer in this tradition was Henry Farrell, whose best-known work was the 1960 Hollywood horror novel "What Ever Happened To Baby Jane?" Farrell's novels spawned the "Grande Dame Guignol" subgenre in cinema, beginning with the 1962 film adaptation of his novel, which starred Bette Davis opposite Joan Crawford; these films were later dubbed the "psycho-biddy" genre. Many modern writers of horror (or indeed other types of fiction) exhibit considerable Gothic sensibilities—examples include the works of Anne Rice, Stella Coulson, Susan Hill, Poppy Z. Brite and Neil Gaiman, as well as some of the sensationalist works of Stephen King. Thomas M. Disch's novel "The Priest" (1994) was subtitled "A Gothic Romance", and was partly modelled on Matthew Lewis' "The Monk".
https://en.wikipedia.org/wiki?curid=12622
Gospel Gospel originally meant the Christian message, but in the 2nd century it came to be used also for the books in which the message was set out; in this sense it includes both the four canonical gospels and various apocryphal gospels dating from the 2nd century and later. The four canonical gospels of Matthew, Mark, Luke and John comprise the first four books of the New Testament of the Bible and were probably written between AD 66 and 110. All four were anonymous (the modern names were added in the 2nd century), almost certainly none were by eyewitnesses, and all are the end-products of long oral and written transmission. They are a subset of the genre of ancient biography; ancient biographies should not be confused with modern ones, as they often included propaganda and "kerygma" (preaching). While there is thus no guarantee that the events they describe are historically accurate, scholars following the quest for the historical Jesus believe that it is possible to differentiate Jesus' own views from those of his later followers. Many non-canonical gospels were also written, all later than the four canonical gospels, and like them advocating the particular theological views of their various authors. A gospel can be defined as a loose-knit, episodic narrative of the words and deeds of Jesus of Nazareth. The four canonical gospels share the same basic outline: Jesus begins his public ministry in conjunction with that of John the Baptist, calls disciples, teaches and heals and confronts the Pharisees, dies on the cross, and is raised from the dead. Despite this, scholars recognise that their differences of detail are irreconcilable, and any attempt to harmonise them would only disrupt their distinct theological messages. John and the three synoptics in particular present significantly different pictures of Jesus's career, with John omitting any mention of his ancestry, birth, and childhood, his baptism, temptation and transfiguration, and the Lord's Supper. John's chronology and arrangement of incidents are also distinctly different, clearly describing the passage of three years in Jesus's ministry in contrast to the single year of the synoptics, placing the cleansing of the Temple at the beginning rather than at the end, and the Last Supper on the day before Passover instead of being a Passover meal. Each gospel has its own distinctive understanding of Jesus and his divine role. Mark never calls him "God" or claims that he existed prior to his earthly life, apparently believes that Jesus had a normal human parentage and birth, and makes no attempt to trace his ancestry back to King David or Adam. Crucially, Mark originally had no post-resurrection appearances of Jesus, although Mark 16:7, in which the young man discovered in the tomb instructs the women to tell "the disciples and Peter" that Jesus will see them again in Galilee, hints that the author knew of the tradition. Matthew makes subtle changes to Mark's narrative in order to stress Jesus's divine nature – for example, the "young man" who appears at Jesus' tomb in Mark becomes a radiant angel in Matthew. Similarly, the miracle stories in Mark confirm Jesus' status as an emissary of God (which was Mark's understanding of the Messiah), but in Matthew they demonstrate divinity. Luke, while following Mark's plot more faithfully than does Matthew, has expanded on the source, corrected Mark's grammar and syntax, and eliminated some passages entirely, notably most of chapters 6 and 7. 
John, the most overtly theological, is the first to make Christological judgements outside the context of the narrative of Jesus's life. The synoptic gospels represent Jesus as an exorcist and healer who preached in parables about the coming Kingdom of God. He preached first in Galilee and later in Jerusalem, where he cleansed the temple. He states that he offers no sign as proof (Mark) or only the sign of Jonah (Matthew and Luke). In Mark, apparently written with a Roman audience in mind, Jesus is a heroic man of action, given to powerful emotions, including agony. In Matthew, apparently written for a Jewish audience, Jesus is repeatedly described as the fulfillment of Hebrew prophecy. In Luke, apparently written for gentiles, Jesus is especially concerned with the poor. Luke emphasizes the importance of prayer and the action of the Holy Spirit in Jesus's life and in the Christian community. Jesus appears as a stoic supernatural being, unmoved even by his own crucifixion. Like Matthew, Luke insists that salvation offered by Christ is for all, and not only for the Jews. The Gospel of John is the only gospel to call Jesus God, and in contrast to Mark, where Jesus hides his identity as messiah, in John he openly proclaims it. It represents Jesus as an incarnation of the eternal Word (Logos) who talked extensively about himself, records no parables spoken by him, and does not explicitly refer to a Second Coming. Jesus preaches in Jerusalem, launching his ministry with the cleansing of the temple. It records his performance of several miracles as signs, most of them not found in the synoptics. The Gospel of John ends:(21:25) "And there are also many other things which Jesus did, the which, if they should be written every one, I suppose that even the world itself could not contain the books that should be written. Amen." Like the rest of the New Testament, the four gospels were written in Greek. The Gospel of Mark probably dates from c. AD 66–70, Matthew and Luke around AD 85–90, and John AD 90–110. Despite the traditional ascriptions all four are anonymous, and most scholars agree that none were written by eyewitnesses. (A few conservative scholars defend the traditional ascriptions or attributions, but for a variety of reasons the majority of scholars have abandoned this view or hold it only tenuously.) In the immediate aftermath of Jesus' death his followers expected him to return at any moment, certainly within their own lifetimes, and in consequence there was little motivation to write anything down for future generations, but as eyewitnesses began to die, and as the missionary needs of the church grew, there was an increasing demand and need for written versions of the founder's life and teachings. The stages of this process can be summarised as follows: Mark is generally agreed to be the first gospel; it uses a variety of sources, including conflict stories (Mark 2:1–3:6), apocalyptic discourse (4:1–35), and collections of sayings, although not the sayings gospel known as the Gospel of Thomas and probably not the Q source used by Matthew and Luke. The authors of Matthew and Luke, acting independently, used Mark for their narrative of Jesus's career, supplementing it with the collection of sayings called the Q document and additional material unique to each called the M source (Matthew) and the L source (Luke). Mark, Matthew and Luke are called the synoptic gospels because of the close similarities between them in terms of content, arrangement, and language. 
The authors and editors of John may have known the synoptics, but did not use them in the way that Matthew and Luke used Mark. There is a near-consensus that this gospel had its origins as a "signs" source (or gospel) that circulated within the Johannine community (the community that produced John and the three epistles associated with the name), later expanded with a Passion narrative and a series of discourses. All four also use the Jewish scriptures, by quoting or referencing passages, or by interpreting texts, or by alluding to or echoing biblical themes. Such use can be extensive: Mark's description of the Parousia (second coming) is made up almost entirely of quotations from scripture. Matthew is full of quotations and allusions, and although John uses scripture in a far less explicit manner, its influence is still pervasive. Their source was the Greek version of the scriptures, called the Septuagint – they do not seem familiar with the original Hebrew. The consensus among modern scholars is that the gospels are a subset of the ancient genre of "bios", or ancient biography. Ancient biographies were concerned with providing examples for readers to emulate while preserving and promoting the subject's reputation and memory; the gospels were never simply biographical, they were propaganda and "kerygma" (preaching). As such, they present the Christian message of the second half of the first century AD, and as Luke's attempt to link the birth of Jesus to the census of Quirinius demonstrates, there is no guarantee that the gospels are historically accurate. The majority view among critical scholars is that the authors of Matthew and Luke have based their narratives on Mark's gospel, editing him to suit their own ends, and the contradictions and discrepancies between these three and John make it impossible to accept both traditions as equally reliable. In addition, the gospels we read today have been edited and corrupted over time, leading Origen to complain in the 3rd century that "the differences among manuscripts have become great, ... [because copyists] either neglect to check over what they have transcribed, or, in the process of checking, they make additions or deletions as they please". For these reasons modern scholars are cautious of relying on the gospels uncritically, but nevertheless they do provide a good idea of the public career of Jesus, and critical study can attempt to distinguish the original ideas of Jesus from those of the later authors. Scholars usually agree that John is not without historical value: certain of its sayings are as old or older than their synoptic counterparts, its representation of the topography around Jerusalem is often superior to that of the synoptics, its testimony that Jesus was executed before, rather than on, Passover, might well be more accurate, and its presentation of Jesus in the garden and the prior meeting held by the Jewish authorities are possibly more historically plausible than their synoptic parallels. Nevertheless, it is highly unlikely that the author had direct knowledge of events, or that his mentions of the Beloved Disciple as his source should be taken as a guarantee of his reliability. The oldest gospel text known is , a fragment of John dating from the first half of the 2nd century. The creation of a Christian canon was probably a response to the career of the heretic Marcion (c. 85–160), who established a canon of his own with just one gospel, the gospel of Luke, which he edited to fit his own theology. 
The Muratorian canon, the earliest surviving list of books considered (by its own author at least) to form Christian scripture, included Matthew, Mark, Luke and John. Irenaeus of Lyons went further, stating that there must be four gospels and only four because there were four corners of the Earth and thus the Church should have four pillars. Epiphanius, Jerome and other early church fathers preserve in their writings citations from Jewish-Christian gospels. Most modern critical scholars consider that the extant citations suggest at least two and probably three distinct works, at least one of which (possibly two) closely parallels the Gospel of Matthew. The Gospel of Thomas consists mostly of wisdom sayings, without a narrative of Jesus's life. The "Oxford Dictionary of the Christian Church" says that the original may date from c. 150. It may represent a tradition independent of the canonical gospels, but one that developed over a long time and was influenced by Matthew and Luke. While it can be understood in Gnostic terms, it lacks the characteristic features of Gnostic doctrine. It includes two unique parables, the parable of the empty jar and the parable of the assassin. It had been lost but was discovered, in a Coptic version dating from c. 350, at Nag Hammadi in 1945–46; three papyri dated to c. 200, containing fragments of a Greek text similar to but not identical with the Coptic text, have also been found. The Gospel of Peter was likely written in the first half of the 2nd century. It seems to be largely legendary, hostile toward Jews, and to include docetic elements. It is a narrative gospel and is notable for asserting that Herod, not Pontius Pilate, ordered the crucifixion of Jesus. It had been lost but was rediscovered in the 19th century. The Gospel of Judas is another controversial and ancient text that purports to tell the story of the gospel from the perspective of Judas, the disciple who is usually said to have betrayed Jesus. It paints an unusual picture of the relationship between Jesus and Judas, in that it appears to interpret Judas's act not as betrayal, but rather as an act of obedience to the instructions of Jesus. The text was recovered from a cave in Egypt by a thief and thereafter sold on the black market until it was finally discovered by a collector who, with the help of academics from Yale and Princeton, was able to verify its authenticity. The document itself does not claim to have been authored by Judas (it is, rather, a gospel about Judas), and is known to date to at least 180 AD. The Gospel of Mary was originally written in Greek during the 2nd century. It is often interpreted as a Gnostic text. It consists mainly of dialogue between Mary Magdalene and the other disciples. It is typically not considered a gospel by scholars since it does not focus on the life of Jesus. The Gospel of Barnabas was a gospel claimed to have been written by Barnabas, one of the apostles. It was presumably written between the 14th and the 16th century. It contradicts the account of Jesus's ministry in the canonical New Testament, but has clear parallels with the Islamic faith, mentioning Muhammad as Messenger of God. It also strongly rejects Pauline doctrine, and in it Jesus describes himself as a prophet, not the son of God. Marcion of Sinope, c. 150, had a much shorter version of the gospel of Luke, differing substantially from what has now become the standard text of the gospel and far less oriented towards the Jewish scriptures. 
Marcion is said to have rejected all other gospels, including those of Matthew, Mark and especially John, which he allegedly rejected as having been forged by Irenaeus. Marcion's critics alleged that he had edited out the portions he did not like from the then canonical version, though Marcion is said to have argued that his text was the more genuinely original one. A genre of "Infancy gospels" (Greek: "protoevangelion") arose in the 2nd century, and includes the Gospel of James, which introduces the concept of the Perpetual Virginity of Mary, and the Infancy Gospel of Thomas (not to be confused with the unrelated Gospel of Thomas), both of which related many miraculous incidents from the life of Mary and the childhood of Jesus that are not included in the canonical gospels. Another genre is that of the gospel harmony, in which the four canonical gospels are combined into a single narrative, either to present a consistent text or to produce a more accessible account of Jesus' life. The oldest known harmony, the "Diatessaron", was compiled by Tatian around 175, and may have been intended to replace the separate gospels as an authoritative text. It was accepted for liturgical purposes for as much as two centuries in Syria, but eventually developed a reputation as being heretical and was suppressed. Subsequent harmonies were written with the more limited aim of being study guides or explanatory texts. They still use all the words and only the words of the four gospels, but the possibility of editorial error, and the loss of the individual viewpoints of the separate gospels, keeps the harmony from being canonical.
https://en.wikipedia.org/wiki?curid=12627
GIMP GIMP ( ; GNU Image Manipulation Program) is a free and open-source raster graphics editor used for image retouching and editing, free-form drawing, converting between different image formats, and more specialized tasks. GIMP is released under GPLv3+ licenses and is available for Linux, macOS, and Microsoft Windows. In 1995 Spencer Kimball and Peter Mattis began developing GIMP as a semester-long project at the University of California, Berkeley for the eXperimental Computing Facility, which they named the "General Image Manipulation Program." The acronym was coined first, with the letter G being added to "IMP" as a reference to "the gimp" in a scene in the 1994 film "Pulp Fiction". In 1996 GIMP (0.54) was released as the first publicly available release. In the following year Kimball and Mattis met with Richard Stallman of the GNU Project while he visited UC Berkeley and asked if they could change "General" in the application's name to "GNU" (the name given to the operating system created by Stallman), and Stallman approved. The application subsequently formed part of the GNU software collection. The number of computer architectures and operating systems supported has expanded significantly since its first release. The first release supported UNIX systems, such as Linux, SGI IRIX and HP-UX. Since the initial release, GIMP has been ported to many operating systems, including Microsoft Windows and macOS; the original port to the Windows 32-bit platform was started by Finnish programmer Tor M. Lillqvist (tml) in 1997 and was supported in the GIMP 1.1 release. Following the first release, GIMP was quickly adopted and a community of contributors formed. The community began developing tutorials, artwork and shared better work-flows and techniques. A GUI toolkit called GTK (GIMP tool kit) was developed to facilitate the development of GIMP. GTK was replaced by its successor GTK+ after being redesigned using object-oriented programming techniques. The development of GTK+ has been attributed to Peter Mattis becoming disenchanted with the Motif toolkit GIMP originally used; Motif was used up until GIMP 0.60. GIMP 0.54 was released in January 1996. It required X11 displays, an X-server that supported the X shared memory extension and Motif 1.2 widgets. It supported 8, 15, 16 and 24-bit color depths, dithering for 8-bit displays and could view images as RGB color, grayscale or indexed color. It could simultaneously edit multiple images, zoom and pan in real-time, and supported GIF, JPEG, PNG, TIFF and XPM images. At this early stage of development GIMP could select regions using rectangle, ellipse, free, fuzzy, bezier, and intelligent selection tools, and rotate, scale, shear and flip images. It had bucket, brush and airbrush painting tools, and could clone, convolve, and blend images. It had text tools, effects filters (such as blur and edge detect), and channel and color operations (such as add, composite, decompose). The plugin system allowed for addition of new file formats and new effect filters. It supported multiple undo and redo operations. It ran on Linux 1.2.13, Solaris 2.4, HP-UX 9.05, and SGI IRIX operating systems. It was rapidly adopted by users, who created tutorials, displayed artwork and shared techniques. An early success for GIMP was the Linux penguin Tux, as drawn by Larry Ewing using GIMP 0.54. By 5 July 1996 the volume of messages posted to the mailing list had risen and the mailing list was split into two lists, gimp-developer and gimp-user. 
Currently, user questions are directed to the gimpnet IRC channel. GIMP 0.60 was released on 6 June 1997 using the GNU General Public License. According to the release notes, Peter Mattis was working for Hewlett-Packard and Spencer Kimball was working as a Java programmer. GIMP 0.60 no longer depended on the Motif toolkit. Improvements had been made to the painting tools, airbrush, channel operations, palettes, blend tool modes, image panning and transformation tools. The editing work flow was improved by enabling rulers, cutting and pasting between all image types, cloning between all image types and ongoing development of a layers dialog. New tools included new brushes (and a new brush file format), grayscale and RGB transparency, "Bucket fill" patterns and a pattern selection dialog, integrated paint modes, border, feather and color selectors, a pencil and eraser paint tool, gamma adjustments and a limited layer move tool. The new widgets were managed by Peter Mattis and were called GTK for GIMP toolkit and GDK for GIMP drawing kit. Sometime in 1998, after a few humorous suggestions of compiling GIMP on Microsoft Windows, Tor Lillqvist began the effort of the initial port of GIMP for Windows. At the time it was considered a code fork. It would later be merged into the main development tree. Support was, and continues to be, offered through a Yahoo! Groups email list. The biggest change in the GIMP 0.99 release was in the GIMP toolkit (GTK). GTK was redesigned to be object oriented and renamed from GTK to GTK+. The pace of development slowed when Spencer Kimball and Peter Mattis found employment. GIMP 1.0.0 was released on 2 June 1998. GIMP and GTK+ split into separate projects during the GIMP 1.0 release. GIMP 1.0 included a new tile-based memory management system, which enabled editing of larger images, and a change in the plug-in API (application programming interface) that allowed scripts to be safely called from other scripts and to be self-documenting. GIMP 1.0 also introduced a native file format (xcf) with support for layers, guides and selections (active channels). An official website was constructed for GIMP during the 1.0 series, designed by Adrian Likins and Jens Lautenbacher, now found at classic.gimp.org, which provided introductory tutorials and additional resources. On 13 April 1997, GIMP News was started by Zach Beane, a site that announced plug-ins, tutorials and articles written about GIMP. In May 1997, Seth Burgess started GIMP Bugs, the first 'electronic bug list'. Marc Lehmann developed a Perl scripting plug-in. Web interfaces were possible with the GIMP 1.0 series, and GIMP Net-fu can still be used for online graphics generation. The GIMP 1.1 series focused on fixing bugs and improving the port to Windows. No official release occurred during this series. Following this, the odd-numbered series (e.g. 1.1) of GIMP releases were considered unstable development releases, and even-numbered series (e.g. 1.2) were considered stable releases. By this time, GTK+ had become a significant project and many of GIMP's original developers turned to GTK+ development. These included Owen Taylor (author of GIMP ifsCompose), Federico Mena, Tim Janik, Shawn Amundson and others. GNOME also attracted GIMP developers. The primary GIMP developers during this period were Manish Singh, Michael Natterer, Sven Neumann and Tor Lillqvist, who primarily fixed issues so that GIMP could run on Win32. GIMP 1.2.0 was released on 25 December 2000 (in time for that Christmas).
GIMP 1.2 had a new development team of Manish Singh, Sven Neumann, Michael Natterer and others. GIMP 1.2 offered internationalization options, improved installation dialogs, many bug fixes (in GIMP and GTK+), overhauled plug-ins, reduced memory leaks and reorganized menus. New plug-ins included GIMPressionist and Sphere Designer by Vidar Madsen; Image Map by Maurits Rijk; GFlare by Eiichi Takamori; Warp by John P. Beale, Stephen Robert Norris and Federico Mena Quintero; and Sample Colorize and Curve Bend by Wolfgang Hofer. New tools included a new path tool, a new airbrush tool, a resizable toolbox, enhanced pressure support, a measure tool, and dodge, burn and smudge tools. New functionality included image pipes, JPEG save preview, a new image navigation window, scaled brush previews, selection to path, drag'n'drop, quickmask, a help browser and tear-off menus; the waterselect plug-in was also integrated into the color selector. The 1.2 series was the final GIMP 1 series. GIMP 2.0.0 was released on 23 March 2004. The biggest visible change was the transition to the GTK+ 2.x toolkit; GIMP 2.2 followed with further incremental changes. Major revisions in interface and tools were made available with the GIMP 2.4.0 release on 24 October 2007. Rewritten selection tools, use of the Tango style guidelines for a polished UI on all platforms, a foreground selection tool, and support for the ABR brush filetype along with the ability to resize brushes were some of the many updates. More major revisions in interface and tools were made available with the 2.6.0 release on 1 October 2008. There were large changes in the UI, free select tool and brush tools, and lesser changes in the code base. Also, partial tool-level integration of GEGL was enacted, which is supposed to lead to higher color depths as well as non-destructive editing in future versions. Starting with the first bugfix version, GIMP 2.6.1, the utility window hint that enforces MDI-style behavior was also honored on Microsoft Windows, rather than only under GNOME. GIMP 2.8 was released on 3 May 2012 with several revisions to the user interface. These include a redesigned save/export menu that aims to reinforce the idea that information is lost when exporting. The text tool was also redesigned so that a user edits text on canvas instead of in a separate dialog window. This feature was one of the Google Summer of Code (GSoC) projects from 2006. GIMP 2.8 also features layer groups, simple math in size entry fields, JPEG2000 support, PDF export, a webpage screenshot utility, and a single-window mode. GEGL has also received its first stable release (0.1), where the Application Programming Interface is considered mostly stable; GEGL has continued to be integrated into GIMP, now handling layer projection; this is a major step toward full integration of GEGL, which will allow GIMP to have better non-destructive work-flows in future releases. GEGL 0.2.0 is integrated into 2.8.xx. GIMP 2.10.0 was released on 27 April 2018. For the 2.10 series, the team began allowing new features to enter stable point releases. GIMP 3.0 will be the first release ported to GTK3. It had been developed in parallel to 2.9, but with low priority until the release of 2.10. Cleaning of code is now one of the focus areas of development. Version 2.99.2 will be the first public development version of 3.0 and is likely to be available in 2020. Non-destructive editing is the main focus in this future version.
GIMP is primarily developed by volunteers as a free and open source software project associated with both the GNU and GNOME projects. Development takes place in a public git source code repository, on public mailing lists and in public chat channels on the GIMPNET IRC network. New features are held in separate public source code branches and merged into the main (or development) branch when the GIMP team is sure they won't damage existing functions. Sometimes this means that features that appear complete do not get merged or take months or years before they become available in GIMP. GIMP itself is released as source code. After a source code release, installers and packages are made for different operating systems by parties who might not be in contact with the maintainers of GIMP. The version number used in GIMP is expressed in a "major-minor-micro" format, with each number carrying a specific meaning: the first (major) number is incremented only for major developments (and is currently 2). The second (minor) number is incremented with each release of new features, with odd numbers reserved for in-progress development versions and even numbers assigned to stable releases; the third (micro) number is incremented before and after each release (resulting in even numbers for releases, and odd numbers for development snapshots) with any bug fixes subsequently applied and released for a stable version. Each year GIMP applies for several positions in the Google Summer of Code (GSoC); to date GIMP has participated in all years except 2007. From 2006 to 2009, nine GSoC projects were listed as successful, although not all successful projects have been merged into GIMP immediately. The healing brush and perspective clone tools and Ruby bindings were created as part of the 2006 GSoC and can be used in version 2.8.0 of GIMP. Other completed projects later became available in a stable version of GIMP, among them Vector Layers (late 2008, in 2.8 and master) and a JPEG 2000 plug-in (mid-2009, in 2.8 and master). Several of the GSoC projects were completed in 2008 but were only merged into stable GIMP releases between 2009 and 2014, for versions 2.8.xx and 2.9.x. Some of them needed additional code work for the master tree. The second public development version in the 2.9 series was 2.9.4, which brought many deep improvements after the initial public version 2.9.2. The third public 2.9 development version was 2.9.6. Among its new features were the removal of the 4 GB size limit on XCF files and an increase in the maximum number of threads to 64, which is important for parallel execution on modern processors such as AMD Ryzen and Intel Xeon. Version 2.9.8 included many bug fixes and improvements to gradients and clipping. Improvements in performance and optimization beyond bug hunting were the development targets for 2.10.0. A macOS beta became available with version 2.10.4. The next stable version in the roadmap is 3.0 with a GTK3 port. The user interface of GIMP is designed by a dedicated design and usability team. This team was formed after the developers of GIMP signed up to join the OpenUsability project. A user-interface brainstorming group has since been created for GIMP, where users of GIMP can send in their suggestions as to how they think the GIMP user interface could be improved. GIMP is presented in two forms, single- and multiple-window mode; GIMP 2.8 defaults to multiple-window mode. In multiple-window mode, a set of windows contains all of GIMP's functionality.
By default, tools and tool settings are on the left and other dialogues are on the right. A layers tab is often to the right of the tools tab, and allows a user to work individually on separate image layers. Layers can be edited by right-clicking on a particular layer to bring up edit options for that layer. The tools tab and layers tab are the most common dockable tabs. The Libre Graphics Meeting (LGM) is a yearly event where developers of GIMP and other projects meet up to discuss issues related to free and open-source graphics software. The GIMP developers hold birds of a feather (BOF) sessions at this event. The current version of GIMP works with numerous operating systems, including Linux, macOS and Microsoft Windows. Many Linux distributions include GIMP as a part of their desktop operating systems, including Fedora and Debian. The GIMP website links to binary installers compiled by Jernej Simončič for the Windows platform. MacPorts was listed as the recommended provider of Mac builds of GIMP, but this is no longer needed as version 2.8.2 and later run natively on macOS. GTK+ was originally designed to run on an X11 server. Because macOS can optionally use an X11 server, porting GIMP to macOS is simpler than creating a Windows port. GIMP is also available as part of the "Ubuntu noroot" package from the Google Play Store on Android. In November 2013, GIMP removed its download from SourceForge, citing misleading download buttons that potentially confuse users, as well as SourceForge's own Windows installer, which bundles potentially unwanted programs. In a statement, GIMP called SourceForge a once "useful and trustworthy place to develop and host FLOSS applications" that now faces "a problem with the ads they allow on their sites ..." GIMP, which discontinued its use of SourceForge as a download mirror site in November 2013, reported in May 2015 that SourceForge was hosting infected versions of its Windows binaries on the site's Open Source Mirror directory. Lifewire reviewed GIMP favorably in March 2019, writing that "(f)or those who have never experienced Photoshop, GIMP is simply a very powerful image manipulation program," and "(i)f you're willing to invest some time learning it, it can be a very good graphics tool." GIMP's fitness for use in professional environments is regularly reviewed; it is often compared to and suggested as a possible replacement for Adobe Photoshop. GIMP has similar functionality to Photoshop, but has a different user interface. GIMP 2.6 was used to create nearly all of the art in "Lucas the Game", an independent video game by developer Timothy Courtney. Courtney started development of "Lucas the Game" in early 2014, and the video game was published in July 2015 for PC and Mac. Courtney explains that GIMP is a powerful tool, fully capable of large professional projects, such as video games. The single-window mode introduced in GIMP 2.8 was reviewed in 2012 by Ryan Paul of "Ars Technica", who noted that it made the user experience feel "more streamlined and less cluttered". Michael Burns, writing for "Macworld" in 2014, described the single-window interface of GIMP 2.8.10 as a "big improvement". In his review of GIMP for "ExtremeTech" in October 2013, David Cardinal noted that GIMP's reputation of being hard to use and lacking features has "changed dramatically over the last couple years", and that it was "no longer a crippled alternative to Photoshop".
He described GIMP's scripting as one of its strengths, but also remarked that some of Photoshop's features such as Text, 3D commands, Adjustment Layers and History are either less powerful or missing in GIMP. Cardinal favorably described the UFRaw converter for raw images used with GIMP, noting that it still "requires some patience to figure out how to use those more advanced capabilities". Cardinal stated that GIMP is "easy enough to try" despite not having as well-developed a documentation and help system as Photoshop's, concluding that it "has become a worthy alternative to Photoshop for anyone on a budget who doesn't need all of Photoshop's vast feature set". Wilber is the official GIMP mascot. Wilber has relevance outside of GIMP as a racer in SuperTuxKart and was displayed on the Bibliothèque nationale de France (National Library of France) as part of Project Blinkenlights. Wilber was created at some time before 25 September 1997 by Tuomas Kuosmanen ("tigert") and has since received additional accessories and a construction kit to ease the process. Tools used to perform image editing can be accessed via the toolbox, through menus and dialogue windows. They include filters and brushes, as well as transformation, selection, layer and masking tools. There are several ways of selecting colors, including palettes, color choosers and using an eyedropper tool to select a color on the canvas. The built-in color choosers include an RGB/HSV selector or scales, a water-color selector, a CMYK selector and a color-wheel selector. Colors can also be selected using hexadecimal color codes as used in HTML color selection. GIMP has native support for indexed color and RGB color spaces; other color spaces are supported using decomposition, where each channel of the new color space becomes a black-and-white image. CMYK, LAB and HSV (hue, saturation, value) are supported this way. Color blending can be achieved using the Blend tool, by applying a gradient to the surface of an image and using GIMP's color modes. Gradients are also integrated into tools such as the brush tool; when the user paints this way, the output color slowly changes. There are a number of default gradients included with GIMP; a user can also create custom gradients with tools provided. Gradient plug-ins are also available. GIMP selection tools include a rectangular and an elliptical selection tool, a free select tool, and a fuzzy select tool (also known as magic wand). More advanced selection tools include the select by color tool for selecting regions of similar color, and the scissors select tool, which creates selections semi-automatically between areas of highly contrasting colors. GIMP also supports a quick mask mode where a user can use a brush to paint the area of a selection. Visibly this looks like a red colored overlay being added or removed. The foreground select tool is an implementation of Simple Interactive Object Extraction (SIOX), a method used to extract foreground elements, such as a person or a tree in focus. The Paths Tool allows a user to create vectors (also known as Bézier curves). Users can use paths to create complex selections, including around natural curves. They can paint (or "stroke") the paths with brushes, patterns, or various line styles. Users can name and save paths for reuse. There are many tools that can be used for editing images in GIMP. The more common tools include a paint brush, pencil, airbrush, eraser and ink tools used to create new or blended pixels.
The Bucket Fill tool can be used to fill a selection with a color or pattern. The Blend tool can be used to fill a selection with a color gradient. These color transitions can be applied to large regions or smaller custom path selections. GIMP also provides "smart" tools that use a more complex algorithm to do things that otherwise would be time-consuming or impossible; these include, for example, the clone, healing brush and perspective clone tools. An image being edited in GIMP can consist of many layers in a stack. The user manual suggests that "A good way to visualize a GIMP image is as a stack of transparencies," where in GIMP terminology, each level (analogous to a transparency) is called a layer. Each layer in an image is made up of several channels. In an RGB image, there are normally 3 or 4 channels, consisting of a red, a green and a blue channel. Color sublayers look like slightly different gray images, but when put together they make a complete image. The fourth channel that may be part of a layer is the alpha channel (or layer mask). This channel measures opacity: a whole image or part of an image can be completely visible, partially visible or invisible. Each layer has a layer mode that can be set to change the colors in the image. Text layers can be created using the text tool, allowing a user to write on an image. Text layers can be transformed in several ways, such as converting them to a path or selection. GIMP has approximately 150 standard effects and filters, including Drop Shadow, Blur, Motion Blur and Noise. GIMP operations can be automated with scripting languages. Script-Fu is a Scheme-based language implemented using a TinyScheme interpreter built into GIMP. GIMP can also be scripted in Perl, Python (Python-Fu), or Tcl, using interpreters external to GIMP. New features can be added to GIMP not only by changing program code (GIMP core), but also by creating plug-ins. These are external programs that are executed and controlled by the main GIMP program. MathMap is an example of a plug-in written in C. There is support for several methods of sharpening and blurring images, including the blur and sharpen tool. The unsharp mask tool is used to sharpen an image selectively — it sharpens only those areas of an image that are sufficiently detailed. The Unsharp Mask tool is considered to give more targeted results for photographs than a normal sharpening filter. The Selective Gaussian Blur tool works in a similar way, except it blurs areas of an image with little detail. The "Generic Graphics Library" ("GEGL") was first introduced as part of GIMP in the 2.6 release. This initial introduction does not yet exploit all of the capabilities of GEGL; as of the 2.6 release, GIMP can use GEGL to perform high bit-depth color operations; because of this, less information is lost when performing color operations. When GEGL is fully integrated, GIMP will have a higher color bit depth and better non-destructive work-flow. GIMP 2.8.xx supports only 8 bits of color precision per channel, which is much less than what digital cameras typically produce (12 bits or more). Full support for high bit depth is included with GIMP 2.10. OpenCL enables hardware acceleration for some operations. GIMP supports importing and exporting a large number of different file formats. GIMP's native format, XCF, is designed to store all information GIMP can contain about an image; XCF is named after the eXperimental Computing Facility where GIMP was authored. Import and export capability can be extended to additional file formats by means of plug-ins.
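As noted above, GIMP can be scripted through Script-Fu (Scheme) or externally in Python (Python-Fu), Perl or Tcl, and extended with plug-ins. The sketch below illustrates, assuming the GIMP 2.x gimpfu registration API, what a minimal Python-Fu plug-in typically looks like; the procedure name, menu path and author fields are illustrative placeholders rather than a description of any plug-in shipped with GIMP.

```python
#!/usr/bin/env python
# Minimal sketch of a GIMP 2.x Python-Fu plug-in (illustrative placeholders).
# It registers one procedure under the Filters menu and flattens the image.
from gimpfu import *

def example_flatten(image, drawable):
    """Flatten the active image and refresh the display."""
    pdb.gimp_image_flatten(image)   # call a procedure from GIMP's procedural database
    gimp.displays_flush()           # make the change visible in the UI

register(
    "python_fu_example_flatten",                          # procedure name (hypothetical)
    "Flatten the image",                                  # blurb
    "Example plug-in that flattens the current image",    # help text
    "Example Author", "Example Author", "2020",           # author, copyright, date
    "<Image>/Filters/Examples/Flatten Example",           # menu entry (hypothetical)
    "*",                                                  # accepts any image type
    [],                                                   # no extra input parameters
    [],                                                   # no return values
    example_flatten)

main()
```

In a typical setup, placing such a script in the user's plug-ins directory and marking it executable is enough for GIMP to register the new menu entry at the next start.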
Since version 2.9.6 and the new stable 2.10.x tree, XCF files can exceed 4 GB in size. Because of the free and open-source nature of GIMP, several forks, variants and derivatives of the computer program have been created to fit the needs of their creators. While GIMP is cross-platform, variants of GIMP may not be. These variants are neither hosted nor linked on the GIMP site. The GIMP site does not host GIMP builds for Windows or Unix-like operating systems either, although it does include a link to a Windows build. Several such variants exist.
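The major-minor-micro version convention described earlier (odd minor numbers for development series, even for stable series, and odd micro numbers for development snapshots) can be summarized in a few lines of code. The following sketch is purely illustrative and is not part of GIMP itself.

```python
# Illustrative sketch of GIMP's "major.minor.micro" versioning convention:
# an odd minor number marks a development series, an even one a stable series,
# and within a series an odd micro number marks a development snapshot.
def classify_gimp_version(version):
    major, minor, micro = (int(part) for part in version.split("."))
    series = "development" if minor % 2 else "stable"
    kind = "snapshot" if micro % 2 else "release"
    return "%s is a %s-series %s" % (version, series, kind)

if __name__ == "__main__":
    for v in ("2.8.2", "2.9.6", "2.10.0", "2.99.2"):
        print(classify_gimp_version(v))   # e.g. "2.10.0 is a stable-series release"
```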
https://en.wikipedia.org/wiki?curid=12628
Global illumination Global illumination (shortened as GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source ("direct illumination"), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not ("indirect illumination"). Theoretically, reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another (as opposed to an object being affected only by a direct source of light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination. Images rendered using global illumination algorithms often appear more photorealistic than those using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry (e.g., radiosity). The stored data can then be used to generate images from different viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly. Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon mapping, and image-based lighting are all examples of algorithms used in global illumination, some of which may be used together to yield results that are not fast, but accurate. These algorithms model diffuse inter-reflection, which is a very important part of global illumination; however, most of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design. Achieving accurate computation of global illumination in real-time remains difficult. In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power. More and more specialized algorithms are used in 3D programs that can effectively simulate global illumination. These algorithms are numerical approximations to the rendering equation. Well-known algorithms for computing global illumination include path tracing, photon mapping and radiosity. Several classes of approach can be distinguished here. In light path notation, global illumination corresponds to paths of the type L(D|S)*E.
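The statement that these algorithms are numerical approximations to the rendering equation can be made concrete by writing the equation out. The form below uses standard notation and is given here for reference rather than being quoted from the text above:

```latex
% Rendering equation: outgoing radiance at point x in direction w_o equals the
% emitted radiance plus the incoming radiance reflected over the hemisphere.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Direct illumination evaluates only the light arriving straight from emitters, while global illumination methods also estimate the recursive term in which the incoming radiance has itself been reflected by other surfaces.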
A full treatment can be found in the literature. Another way to simulate real global illumination is the use of high dynamic range images (HDRIs), also known as environment maps, which encircle and illuminate the scene. This process is known as image-based lighting.
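As a concrete illustration of the constant "ambient term" approximation described above, the snippet below shades a surface point with a fixed ambient contribution plus Lambertian diffuse lighting from a single direct light. The vectors and constants are arbitrary example values, and the code stands in for the idea rather than any particular renderer.

```python
# Sketch of the "ambient term" cheat: approximate indirect light with a constant,
# and add Lambertian (cosine-weighted) diffuse lighting from one direct source.
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def shade(normal, light_dir, albedo, light_color, ambient=0.1):
    n = normalize(normal)
    l = normalize(light_dir)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))   # diffuse cosine term
    return tuple(albedo[i] * (ambient + light_color[i] * n_dot_l) for i in range(3))

# Example: a reddish surface lit roughly from above; the ambient constant keeps
# surfaces facing away from the light from rendering completely black.
print(shade(normal=(0.0, 1.0, 0.0), light_dir=(0.3, 1.0, 0.2),
            albedo=(0.8, 0.3, 0.3), light_color=(1.0, 1.0, 1.0)))
```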
https://en.wikipedia.org/wiki?curid=12629
Geometric series In mathematics, a geometric series is a series with a constant ratio between successive terms. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is geometric, because each successive term can be obtained by multiplying the previous term by 1/2. Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Historically, geometric series played an important role in the early development of calculus, and they continue to be central in the study of convergence of series. Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance. The terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a. The term r is the common ratio, and a is the first term of the series. As an example, the geometric series given in the introduction may simply be written as a + ar + ar^2 + ar^3 + ⋯, with a = 1/2 and r = 1/2. Geometric series with different start terms and common ratios behave differently. The behavior of the terms depends on the common ratio r: if |r| is less than one, the terms become smaller and smaller, approaching zero in the limit; if |r| is greater than one, the terms grow without bound; and if |r| equals one, the terms do not change in size. The sum of a geometric series is finite as long as the absolute value of the ratio is less than 1; as the numbers near zero, they become insignificantly small, allowing a sum to be calculated despite the series containing infinitely many terms. The sum can be computed using the self-similarity of the series. Consider the sum of the following geometric series: s = 1 + 2/3 + 4/9 + 8/27 + ⋯. This series has common ratio 2/3. If we multiply through by this common ratio, then the initial 1 becomes a 2/3, the 2/3 becomes a 4/9, and so on: (2/3)s = 2/3 + 4/9 + 8/27 + 16/81 + ⋯. This new series is the same as the original, except that the first term is missing. Subtracting the new series (2/3)s from the original series s cancels every term in the original but the first: s − (2/3)s = 1, so that s = 3. A similar technique can be used to evaluate any self-similar expression. For r ≠ 1, the sum of the first n terms of a geometric series is a + ar + ar^2 + ⋯ + ar^(n−1) = a(1 − r^n)/(1 − r), where a is the first term of the series, and r is the common ratio. We can derive the formula for the sum, s, as follows: writing s = a + ar + ⋯ + ar^(n−1) and rs = ar + ar^2 + ⋯ + ar^n, subtraction gives s − rs = a − ar^n, and solving for s yields s = a(1 − r^n)/(1 − r). As n goes to infinity, the absolute value of r must be less than one for the series to converge. The sum then becomes s = a/(1 − r). When a = 1, this can be simplified to 1 + r + r^2 + r^3 + ⋯ = 1/(1 − r), the left-hand side being a geometric series with common ratio r. The formula also holds for complex r, with the corresponding restriction that the modulus of r is strictly less than one. We can prove that the geometric series converges using the sum formula for a geometric progression: since (1 + r + r^2 + ... + r^n)(1 − r) = 1 − r^(n+1) and r^(n+1) → 0 for |r| < 1, the partial sums tend to 1/(1 − r). Book IX, Proposition 35 of Euclid's "Elements" expresses the partial sum of a geometric series in terms of members of the series. It is equivalent to the modern formula. In economics, geometric series are used to represent the present value of an annuity (a sum of money to be paid in regular intervals). For example, suppose that a payment of $100 will be made to the owner of the annuity once per year (at the end of the year) in perpetuity. Receiving $100 a year from now is worth less than an immediate $100, because one cannot invest the money until one receives it. In particular, the present value of $100 one year in the future is $100 / (1 + i), where i is the yearly interest rate.
Similarly, a payment of $100 two years in the future has a present value of $100 / (1 + i)^2 (squared because two years' worth of interest is lost by not receiving the money right now). Therefore, the present value of receiving $100 per year in perpetuity is the infinite series $100/(1 + i) + $100/(1 + i)^2 + $100/(1 + i)^3 + $100/(1 + i)^4 + ⋯. This is a geometric series with common ratio 1 / (1 + i). The sum is the first term divided by (one minus the common ratio): [$100/(1 + i)] / [1 − 1/(1 + i)] = $100/i. For example, if the yearly interest rate is 10% (i = 0.10), then the entire annuity has a present value of $100 / 0.10 = $1000. This sort of calculation is used to compute the APR of a loan (such as a mortgage loan). It can also be used to estimate the present value of expected stock dividends, or the terminal value of a security. The formula for a geometric series can be interpreted as a power series in the Taylor's theorem sense, converging where |x| < 1. From this, one can extrapolate to obtain other power series. For example, substituting −x for x gives 1/(1 + x) = 1 − x + x^2 − x^3 + ⋯. By differentiating the geometric series, one obtains the variant 1 + 2x + 3x^2 + 4x^3 + ⋯ = 1/(1 − x)^2, valid for |x| < 1. Series for related functions, such as the natural logarithm and the inverse tangent, are obtained similarly.
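The perpetuity example above can be checked numerically: summing the discounted $100 payments term by term should approach the closed-form value $100 / i. The short sketch below does exactly that; the cut-off of 1,000 terms is an arbitrary illustrative choice.

```python
# Check the perpetuity example: $100 paid at the end of each year, discounted at
# rate i, is a geometric series with ratio 1/(1+i) whose sum approaches 100/i.
def present_value(payment, rate, years):
    return sum(payment / (1.0 + rate) ** n for n in range(1, years + 1))

closed_form = 100.0 / 0.10               # $1000, i.e. a/(1 - r) with a = 100/1.1 and r = 1/1.1
partial_sum = present_value(100.0, 0.10, 1000)
print(closed_form, partial_sum)          # the partial sum is very close to 1000
```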
https://en.wikipedia.org/wiki?curid=12630
List of islands of Greece Greece has many islands, with estimates ranging from around 1,200 to 6,000, depending on the minimum size to take into account. The number of inhabited islands is variously cited as between 166 and 227. The largest Greek island by area is Crete, located at the southern edge of the Aegean Sea. The second largest island is Euboea, which is separated from the mainland by the 60m-wide Euripus Strait, and is administered as part of the Central Greece region. After the third and fourth largest Greek islands, Lesbos and Rhodes, the rest of the islands are two-thirds of the area of Rhodes, or smaller. The Greek islands are traditionally grouped into the following clusters: the Argo-Saronic Islands in the Saronic gulf near Athens; the Cyclades, a large but dense collection occupying the central part of the Aegean Sea; the North Aegean islands, a loose grouping off the west coast of Turkey; the Dodecanese, another loose collection in the southeast between Crete and Turkey; the Sporades, a small tight group off the coast of Euboea; and the Ionian Islands, chiefly located to the west of the mainland in the Ionian Sea. Crete with its surrounding islets and Euboea are traditionally excluded from this grouping. This article excludes the Peloponnese, which has technically been an island since the construction of the Corinth Canal in 1893, but is rarely considered to be an island. The following are the largest Greek islands listed by surface area; the table includes all islands above a minimum surface area. A further list includes islands that may have been inhabited in the past but are now uninhabited. There is also a list of islands, islets, and rocks that surround the island of Crete.
https://en.wikipedia.org/wiki?curid=12634
Gian Lorenzo Bernini Gian Lorenzo (or Gianlorenzo) Bernini (also Giovanni Lorenzo; 7 December 1598 – 28 November 1680) was an Italian sculptor and architect. While a major figure in the world of architecture, he was also, and even more prominently, the leading sculptor of his age, credited with creating the Baroque style of sculpture. As one scholar has commented, "What Shakespeare is to drama, Bernini may be to sculpture: the first pan-European sculptor whose name is instantaneously identifiable with a particular manner and vision, and whose influence was inordinately powerful..." In addition, he was a painter (mostly small canvases in oil) and a man of the theater: he wrote, directed and acted in plays (mostly Carnival satires), for which he designed stage sets and theatrical machinery. He produced designs as well for a wide variety of decorative art objects including lamps, tables, mirrors, and even coaches. As an architect and city planner, he designed secular buildings, churches, chapels, and public squares, as well as massive works combining both architecture and sculpture, especially elaborate public fountains and funerary monuments and a whole series of temporary structures (in stucco and wood) for funerals and festivals. His broad technical versatility, boundless compositional inventiveness and sheer skill in manipulating marble ensured that he would be considered a worthy successor of Michelangelo, far outshining other sculptors of his generation. His talent extended beyond the confines of sculpture to a consideration of the setting in which it would be situated; his ability to synthesize sculpture, painting, and architecture into a coherent conceptual and visual whole has been termed by the late art historian Irving Lavin the "unity of the visual arts". Bernini was born in Naples in 1598 to Angelica Galante and Mannerist sculptor Pietro Bernini, originally from Florence. He was the sixth of their thirteen children. Gian Lorenzo Bernini showed prodigious talent from early childhood. He was "recognized as a prodigy when he was only eight years old, [and] he was consistently encouraged by his father, Pietro. His precocity earned him the admiration and favor of powerful patrons who hailed him as 'the Michelangelo of his century'". In 1606, his father received a papal commission (to contribute a marble relief in the Cappella Paolina of Santa Maria Maggiore) and so moved from Naples to Rome, taking his entire family with him and continuing in earnest the training of his son Gian Lorenzo. Several extant works, dating from circa 1615-1620, are, by general scholarly consensus, collaborative efforts by both father and son: they include the "Faun Teased by Putti" (c. 1615, Metropolitan Museum, NYC), "Boy with a Dragon" (c. 1616-17, Getty Museum, Los Angeles), the Aldobrandini "Four Seasons" (c. 1620, private collection), and the recently discovered "Bust of the Savior" (1615–16, New York, private collection). Sometime after the arrival of the Bernini family in Rome, word about the great talent of the boy Gian Lorenzo got around and he soon caught the attention of Cardinal Scipione Borghese, nephew to the reigning pope, Paul V, who spoke of the boy genius to his uncle. Bernini was therefore presented before Pope Paul V, curious to see if the stories about Gian Lorenzo's talent were true. The boy improvised a sketch of Saint Paul for the marveling pope, and this marked the beginning of the pope's attention to the young talent.
Once he was brought to Rome, he rarely left its walls, except (much against his will) for a five-month stay in Paris in the service of King Louis XIV and brief trips to nearby towns (including Civitavecchia, Tivoli and Castelgandolfo), mostly for work-related reasons. Rome was Bernini’s city: “'You are made for Rome,’ said Pope Urban VIII to him, 'and Rome for you'”. It was in this world of 17th-century Rome and the international religious-political power which resided there that Bernini created his greatest works. Bernini's works are therefore often characterized as perfect expressions of the spirit of the assertive, triumphal but self-defensive Counter Reformation Roman Catholic Church. Certainly Bernini was a man of his times and deeply religious (at least later in life), but he and his artistic production should not be reduced simply to instruments of the papacy and its political-doctrinal programs, an impression that is at times communicated by the works of the three most eminent Bernini scholars of the previous generation, Rudolf Wittkower, Howard Hibbard, and Irving Lavin. As Tomaso Montanari's recent revisionist monograph, "La libertà di Bernini" (Turin: Einaudi, 2016) argues and Franco Mormando's anti-hagiographic biography, "Bernini: His Life and His Rome" (Chicago: University of Chicago Press, 2011), illustrates, Bernini and his artistic vision maintained a certain degree of freedom from the mindset and mores of Counter-Reformation Roman Catholicism. Under the patronage of the extravagantly wealthy and most powerful Cardinal Scipione Borghese, the young Bernini rapidly rose to prominence as a sculptor. Among his early works for the cardinal were decorative pieces for the garden of the Villa Borghese, such as "The Goat Amalthea with the Infant Jupiter and a Faun". This marble sculpture (executed sometime before 1615) is generally considered by scholars to be the earliest work executed entirely by Bernini himself. Other allegorical busts also date to this period, including the so-called "Damned Soul" and "Blessed Soul" of circa 1619, which may have been influenced by a set of prints by Pieter de Jode I but which were in fact unambiguously cataloged in the inventory of their first documented owner, Fernando de Botinete y Acevedo, as depicting a nymph and a satyr, a commonly paired duo in ancient sculpture (they were not commissioned by nor ever belonged to either Scipione Borghese or, as most scholarship erroneously claims, the Spanish cleric, Pedro Foix Montoya). By the time he was twenty-two, Bernini was considered talented enough to have been given a commission for a papal portrait, the "Bust of Pope Paul V", now in the J. Paul Getty Museum. Bernini's reputation, however, was definitively established by four masterpieces, executed between 1619 and 1625, all now displayed in the Galleria Borghese in Rome. To the art historian Rudolf Wittkower these four works—"Aeneas, Anchises, and Ascanius" (1619), "The Rape of Proserpina" (1621–22), "Apollo and Daphne" (1622–1625), and "David" (1623–24)—"inaugurated a new era in the history of European sculpture". It is a view repeated by other scholars, such as Howard Hibbard who proclaimed that, in all of the seventeenth century, "there were no sculptors or architects comparable to Bernini". 
Adapting the classical grandeur of Renaissance sculpture and the dynamic energy of the Mannerist period, Bernini forged a new, distinctly Baroque conception for religious and historical sculpture, powerfully imbued with dramatic realism, stirring emotion and dynamic, theatrical compositions. Bernini's early sculpture groups and portraits manifest "a command of the human form in motion and a technical sophistication rivalled only by the greatest sculptors of classical antiquity." Moreover, Bernini possessed the ability to depict highly dramatic narratives with characters showing intense psychological states, but also to organize large-scale sculptural works that convey a magnificent grandeur. Unlike sculptures done by his predecessors, these focus on specific points of narrative tension in the stories they are trying to tell: Aeneas and his family fleeing the burning Troy; the instant that Pluto finally grasps the hunted Persephone; the precise moment that Apollo sees his beloved Daphne begin her transformation into a tree. They are transitory but dramatic, powerful moments in each story. Bernini's "David" is another stirring example of this. Michelangelo's motionless, idealized "David" shows the subject holding a rock in one hand and a sling in the other, contemplating the battle; similarly immobile versions by other Renaissance artists, including Donatello's, show the subject in his triumph after the battle with Goliath. Bernini illustrates David during his active combat with the giant, as he twists his body to catapult toward Goliath. To emphasize these moments, and to ensure that they were appreciated by the viewer, Bernini designed the sculptures with a specific viewpoint in mind. Their original placements within the Villa Borghese were against walls so that the viewers' first view was the dramatic moment of the narrative. The result of such an approach is to invest the sculptures with greater psychological energy. The viewer finds it easier to gauge the state of mind of the characters and therefore understands the larger story at work: Daphne's wide open mouth in fear and astonishment, David biting his lip in determined concentration, or Proserpina desperately struggling to free herself. In addition to portraying psychological realism, they show a greater concern for representing physical details. The tousled hair of Pluto, the pliant flesh of Proserpina, or the forest of leaves beginning to envelop Daphne all demonstrate Bernini's exactitude and delight in representing complex real-world textures in marble form. Beginning in 1623, with the ascent of his friend and former tutor, Cardinal Maffeo Barberini, to the papal throne as Pope Urban VIII, Bernini enjoyed near monopolistic patronage from the Barberini pope and family. The new Pope Urban is reported to have remarked, "It is a great fortune for you, O Cavaliere, to see Cardinal Maffeo Barberini made pope, but our fortune is even greater to have Cavalier Bernini alive in our pontificate." Although he did not fare so well during the reign of Innocent X, under Alexander VII, he once again regained pre-eminent artistic domination and continued to be held in high regard by Clement IX. His horizons rapidly and widely broadened: he was not just producing sculpture for private residences, but playing the most significant artistic (and engineering) role on the city stage, as sculptor, architect, and urban planner.
His official appointments also testify to this—"curator of the papal art collection, director of the papal foundry at Castel Sant'Angelo, commissioner of the fountains of Piazza Navona". Such positions gave Bernini the opportunity to demonstrate his versatile skills throughout the city. To great protest from older, experienced master architects, he, with virtually no architectural training to his name, was appointed Chief Architect of St Peter's in 1629, upon the death of Carlo Maderno. From then on, Bernini's work and artistic vision would be placed at the symbolic heart of Rome. Bernini's artistic pre-eminence, particularly during the reign of Pope Urban VIII (1623–1644) and again under Pope Alexander VII (1655–1665), meant he was able to secure the most important commissions in the Rome of his day, namely, the various massive embellishment projects of the newly finished St. Peter's Basilica, completed under Pope Paul V with the addition of Maderno's nave and facade and finally re-consecrated by Pope Urban VIII on 18 November 1626, after 150 years of planning and building. Within the basilica he was responsible for the Baldacchino, the decoration of the four piers under the cupola, the Cathedra Petri or Chair of St. Peter in the apse, the tomb monument of Matilda of Tuscany, the chapel of the Blessed Sacrament in the right nave, and the decoration (floor, walls and arches) of the new nave. The St Peter's Baldacchino immediately became the visual centerpiece of the new St. Peter's. Designed as a massive spiraling gilded bronze canopy over the tomb of St Peter, Bernini's four-pillared creation towered high above the ground and cost around 200,000 Roman scudi (about $8m in currency of the early 21st century). "Quite simply", writes one art historian, "nothing like it had ever been seen before". Soon after the St Peter's Baldacchino, Bernini undertook the whole-scale embellishment of the four massive piers at the crossing of the basilica (i.e., the structures supporting the cupola) including, most notably, four colossal, theatrically dramatic statues, among them, the majestic St. Longinus executed by Bernini himself (the other three are by other contemporary sculptors François Duquesnoy, Francesco Mochi, and Bernini's disciple, Andrea Bolgi). In the basilica, Bernini also began work on the tomb for Urban VIII, completed only after Urban's death in 1644, one in a long, distinguished series of tombs and funerary monuments for which Bernini is famous and a traditional genre upon which his influence left an enduring mark, often copied by subsequent artists. Indeed, Bernini's final and most original tomb monument, the Tomb of Pope Alexander VII, in St. Peter's Basilica, represents, according to Erwin Panofsky, the very pinnacle of European funerary art, whose creative inventiveness subsequent artists could not hope to surpass. Begun and largely completed during Alexander VII's reign, Bernini's design of the Piazza San Pietro in front of the Basilica is one of his most innovative and successful architectural designs, which transformed a formerly irregular, inchoate open space into an aesthetically unified, emotionally thrilling, and logistically efficient (for carriages and crowds) whole, completely in harmony with the pre-existing buildings and other features and adding to the majesty of the basilica. Despite this engagement with public architecture, Bernini was still able to produce artworks that showed the gradual refinement of his portrait technique.
A number of Bernini's sculptures show the continual evolution of his ability to capture the utterly distinctive personal characteristics that he saw in his sitters. These included a number of busts of Urban VIII himself, the family bust of Francesco Barberini or, most notably, the Two Busts of Scipione Borghese—the second of which had been rapidly created by Bernini once a flaw had been found in the marble of the first. The transitory nature of the expression on Scipione's face is often noted by art historians, iconic of the Baroque concern for representing movement in static artworks. To Rudolf Wittkower the "beholder feels that in the twinkle of an eye not only might the expression and attitude change but also the folds of the casually arranged mantle". Portraits in marble include that of Costanza Bonarelli (executed around 1637), unusual in its more personal, intimate nature (in fact, it would appear to be the first fully finished marble portrait of a non-aristocratic woman by a major artist in European history). Bernini had an affair with Costanza, who was the wife of one of his assistants. When Bernini then suspected Costanza of involvement with his brother, he badly beat him and ordered a servant to slash her face with a razor. Pope Urban VIII intervened on his behalf, and he was simply fined. Beginning in the late 1630s, now known in Europe as one of the most accomplished portraitists in marble, Bernini also began to receive royal commissions from outside Rome, for subjects such as Cardinal Richelieu of France, Francesco I d'Este, the powerful Duke of Modena, Charles I of England and his wife, Queen Henrietta Maria. The sculpture of Charles I was produced in Rome from a triple portrait (oil on canvas) executed by Van Dyck, which survives today in the British Royal Collection. The bust of Charles was lost in the Whitehall Palace fire of 1698 (though its design is known through contemporary copies and drawings) and that of Henrietta Maria was not undertaken due to the outbreak of the English Civil War. Under Urban VIII, Bernini had been appointed chief architect for the basilica of St. Peter's. Work by Bernini included the aforementioned Baldacchino and the St Longinus. In 1636, eager to finally finish the exterior of St. Peter's, Pope Urban ordered Bernini to design and build the two long-intended bell towers for its facade: the foundations of the two towers had already been designed and constructed (namely, the last bays at either extremity of the facade) by Carlo Maderno (architect of the nave and the facade) decades earlier. Once the first tower was finished in 1641, cracks began to appear in the facade but, curiously enough, work continued on the second tower and the first storey was completed. Despite the presence of the cracks, work only stopped in July 1642 once the papal treasury had been exhausted by the disastrous War of Castro. With the death of Pope Urban and the ascent to power in 1644 of the Barberini enemy Pope Innocent X Pamphilj, Bernini's enemies (especially Borromini) raised a great alarm over the cracks, predicting a disaster for the whole basilica and placing the blame entirely on Bernini. The subsequent investigations, in fact, revealed the cause of the cracks as Maderno's defective foundations and not Bernini's elaborate design, an exoneration later confirmed by the meticulous investigation conducted in 1680 under Pope Innocent XI.
Nonetheless, Bernini's opponents in Rome succeeded in seriously damaging the reputation of Urban's artist and in persuading the pope to order (in February 1646) the complete demolition of both towers, to Bernini's great humiliation and indeed financial detriment. After this, one of the rare failures of his career, Bernini retreated into himself: according to his son Domenico, his subsequent unfinished statue of 1647, "Truth Unveiled by Time", was intended to be his self-consoling commentary on this affair, expressing his faith that eventually Time would reveal the actual Truth behind the story and exonerate him fully, as indeed did occur. Bernini did not entirely lose patronage, not even of the pope. Innocent X maintained Bernini in all of the official roles given to him by Urban. Under Bernini's design and direction, work continued on decorating the massive new but entirely unadorned nave of St. Peter's, with the addition of an elaborate multi-colored marble flooring, marble facing on the walls and pilasters, and scores of stuccoed statues and reliefs. It is not without reason that Pope Alexander VII once quipped, 'if one were to remove from Saint Peter's everything that had been made by the Cavalier Bernini, that temple would be stripped bare.' Indeed, given all of his many and various works within the basilica over several decades, it is to Bernini that is due the lion's share of responsibility for the final and enduring aesthetic appearance and emotional impact of St. Peter's. He was also allowed to continue to work on Urban VIII's tomb, despite Innocent's antipathy for the Barberini. A few months after completing Urban's tomb, Bernini won, in controversial circumstances, the Pamphilj commission for the prestigious Four Rivers Fountain on Piazza Navona, marking the end of his disgrace and the beginning of yet another glorious chapter in his life. If there had been doubts over Bernini's position as Rome's preeminent artist, the success of the Four Rivers Fountain removed them. Bernini continued to receive commissions from Pope Innocent X and other senior members of Rome's clergy and aristocracy, as well as from exalted patrons outside of Rome, such as Francesco d'Este. In such an environment, Bernini's artistic style flourished. New types of funerary monument were designed, such as the seemingly floating medallion, hovering in the air as it were, for the deceased nun Maria Raggi, while chapels he designed, such as the Raimondi Chapel in the church of San Pietro in Montorio, illustrated how Bernini could use hidden lighting to help suggest divine intervention within the narratives he was depicting. The Cornaro Chapel showcased Bernini's ability to integrate sculpture, architecture, fresco, stucco, and lighting into "a marvelous whole" ("bel composto", to use early biographer Filippo Baldinucci's term to describe his approach to architecture) and thus create what scholar Irving Lavin has called the "unified work of art". The central focus of the Cornaro Chapel is the ecstasy of the Spanish nun and saint-mystic, Teresa of Avila. Bernini presents the spectator with a theatrically vivid portrait, in gleaming white marble, of the swooning Teresa and the quietly smiling angel, who delicately grips the arrow piercing the saint's heart.
On either side of the chapel the artist places (in what can only strike the viewer as theater boxes), portraits in relief of various members of the Cornaro family – the Venetian family memorialized in the chapel, including Cardinal Federico Cornaro who commissioned the chapel from Bernini – who are in animated conversation among themselves, presumably about the event taking place before them. The result is a complex but subtly orchestrated architectural environment providing the spiritual context (a heavenly setting with a hidden source of light) that suggests to viewers the ultimate nature of this miraculous event. It was an artistic tour de force that incorporates all of the multiple forms of visual art and technique that Bernini had at his disposal, including hidden lighting, thin gilded beams, recessive architectural space, secret lens, and over twenty diverse types of colored marble: these all combine to create the final artwork—"a perfected, highly dramatic and deeply satisfying seamless ensemble". Upon his accession to the Chair of St Peter, Pope Alexander VII Chigi (1655–1667) began to implement his extremely ambitious plan to transform Rome into a magnificent world capital by means of systematic, bold (and costly) urban planning. In so doing, he brought to fruition the long, slow recreation of the urban glory of Rome—the "renovatio Romae"—that had begun in the fifteenth century under the Renaissance popes. Alexander immediately commissioned large-scale architectural changes in the city, for example connecting new and existing buildings by opening up streets and piazzas. Bernini's career showed a greater focus on designing buildings (and their immediate surroundings) during this pontificate, as there were far greater opportunities. Bernini's creations during this period include the piazza leading to St Peter's. In a previously broad, unstructured space, he created two massive semi-circular colonnades, each row of which was formed of four white columns. This resulted in an oval shape that formed an inclusive arena within which any gathering of citizens, pilgrims and visitors could witness the appearance of the pope—either as he appeared on the loggia on the facade of St Peter's or on balconies on the neighbouring Vatican palaces. Often likened to two arms reaching out from the church to embrace the waiting crowd, Bernini's creation extended the symbolic greatness of the Vatican area, creating an "exhilarating expanse" that was, architecturally, an "unequivocal success". Elsewhere within the Vatican, Bernini created systematic rearrangements and majestic embellishment of either empty or aesthetically undistinguished space that exist as he designed them to the present day and have become indelible icons of the splendor of the papal precincts. Within the hitherto unadorned apse of the basilica, the Cathedra Petri, the symbolic throne of St Peter, was rearranged as a monumental gilded bronze extravagance that matched the Baldacchino created earlier in the century. Bernini's complete reconstruction of the Scala Regia, the stately papal stairway between St. Peters's and the Vatican Palace, was slightly less ostentatious in appearance but still taxed Bernini's creative powers (employing, for example, clever tricks of optical illusion) to create a seemingly uniform, totally functional, but nonetheless regally impressive stairway to connect two irregular buildings within an even more irregular space. Not all works during this era were on such a large scale. 
Indeed, the commission Bernini received to build the church of Sant'Andrea al Quirinale for the Jesuits was relatively modest in physical size (though great in its interior chromatic splendor), which Bernini executed completely free of charge. Sant'Andrea shared with the St. Peter's piazza—unlike the complex geometries of his rival Francesco Borromini—a focus on basic geometric shapes, circles and ovals to create spiritually intense buildings. Equally, Bernini moderated the presence of colour and decoration within these buildings, focussing visitors' attention on these simple forms that underpinned the building. Sculptural decoration was never eliminated, but its use was more minimal. He also designed the church of Santa Maria dell'Assunzione in the town of Ariccia with its circular outline, rounded dome and three-arched portico. At the end of April 1665, and still considered the most important artist in Rome, if indeed not in all of Europe, Bernini was forced by political pressure (from both the French court and Pope Alexander VII) to travel to Paris to work for King Louis XIV, who required an architect to complete work on the royal palace of the Louvre. Bernini would remain in Paris until mid-October. Louis XIV assigned a member of his court to serve as Bernini's translator, tourist guide, and overall companion, Paul Fréart de Chantelou, who kept a "Journal" of Bernini's visit that records much of Bernini's behaviour and utterances in Paris. The writer Charles Perrault, who was serving at this time as an assistant to the French Finance Minister Jean-Baptiste Colbert, also provided a first-hand account of Bernini's visit. Bernini's popularity was such that on his walks in Paris the streets were lined with admiring crowds. But things soon turned sour. Bernini presented finished designs for the east front (i.e., the all-important principal facade of the entire palace) of the Louvre, which were ultimately rejected, albeit formally not until 1667, well after his departure from Paris (indeed, the already constructed foundations for Bernini's Louvre addition were inaugurated in October 1665 in an elaborate ceremony, with both Bernini and King Louis in attendance). It is often stated in the scholarship on Bernini that his Louvre designs were turned down because Louis and his financial advisor Jean-Baptiste Colbert considered them too Italianate or too Baroque in style. In fact, as Franco Mormando points out, "aesthetics are "never" mentioned in any of [the] . . . surviving memos" by Colbert or any of the artistic advisors at the French court. The explicit reasons for the rejections were utilitarian, namely, on the level of physical security and comfort (e.g., location of the latrines). It is also indisputable that there was an interpersonal conflict between Bernini and the young French king, each one feeling insufficiently respected by the other. Though his design for the Louvre went unbuilt, it circulated widely throughout Europe by means of engravings and its direct influence can be seen in subsequent stately residences such as Chatsworth House, Derbyshire, England, seat of the Dukes of Devonshire. Other projects in Paris suffered a similar fate. With the exception of Chantelou, Bernini failed to forge significant friendships at the French court. 
His frequent negative comments on various aspects of French culture, especially its art and architecture, did not go down well, particularly in juxtaposition to his praise for the art and architecture of Italy (especially Rome); he said that a painting by Guido Reni was worth more than all of Paris. The sole work remaining from his time in Paris is the "Bust of Louis XIV" although he also contributed a great deal to the execution of the Christ Child Playing with a Nail marble relief (now in the Louvre) by his son Paolo as a gift to the Queen of France. Back in Rome, Bernini created a monumental equestrian statue of Louis XIV; when it finally reached Paris (in 1685, five years after the artist's death), the French king found it extremely repugnant and wanted it destroyed; it was instead re-carved into a representation of the ancient Roman hero Marcus Curtius. Bernini remained physically and mentally vigorous and active in his profession until just two weeks before his death that came as a result of a stroke. The pontificate of his old friend, Clement IX, was too short (barely two years) to accomplish more than the dramatic refurbishment by Bernini of the Ponte Sant'Angelo, while the artist's elaborate plan, under Clement, for a new apse for the basilica of Santa Maria Maggiore came to an unpleasant end in the midst of public uproar over its cost and the destruction of ancient mosaics that it entailed. The last two popes of Bernini's life, Clement X and Innocent XI, were both not especially close or sympathetic to Bernini and not particularly interested in financing works of art and architecture, especially given the disastrous conditions of the papal treasury. The most important commission by Bernini, executed entirely by him in just six months in 1674, under Clement X was the statue of the Blessed Ludovica Albertoni, another nun-mystic. The work, reminiscent of Bernini's "Ecstasy of Saint Teresa," is located in the chapel dedicated to Ludovica remodeled under Bernini's supervision in the Trastevere church of San Francesco in Ripa, whose facade was designed by Bernini's disciple, Mattia de' Rossi. In his last two years, Bernini also carved (supposedly for Queen Christina) the bust of the Savior (Basilica of San Sebastiano fuori le Mura, Rome) and supervised the restoration of the historic Palazzo della Cancelleria as per papal commission under Innocent XI. The latter commission is outstanding confirmation of both Bernini's continuing professional reputation and good health of mind and body even in advanced old age, inasmuch as the pope had chosen him over any number of talented younger architects plentiful in Rome, for this prestigious and most difficult assignment since, as his son Domenico points out, "deterioration of the palace had advanced to such an extent that the threat of its imminent collapse was quite apparent." Shortly after the completion of the latter project, Bernini died in his home on 28 November 1680 and was buried, with little public fanfare, in the simple, unadorned Bernini family vault, along with his parents, in the Basilica di Santa Maria Maggiore. 
Though an elaborate funerary monument had once been planned (documented by a single extant sketch of circa 1670 by disciple Ludovico Gimignani), it was never built and Bernini remained with no permanent public acknowledgement of his life and career in Rome until 1898 when, on the anniversary of his birth, a simple plaque and a small bust were affixed to the face of his home on the Via della Mercede, proclaiming "Here lived and died Gianlorenzo Bernini, a sovereign of art, before whom reverently bowed popes, princes, and a multitude of peoples." In the 1630s he engaged in an affair with a married woman named Costanza (wife of his workshop assistant, Matteo Bonucelli, also called Bonarelli) and sculpted a bust of her (now in the Bargello, Florence) during the height of their romance. She later had an affair with his younger brother, Luigi, who was Bernini's right-hand man in his studio. When Gian Lorenzo found out about Costanza and his brother, in a fit of mad fury he chased Luigi through the streets of Rome and into the basilica of Santa Maria Maggiore, threatening his life. To punish his unfaithful mistress, Bernini had a servant go to the house of Costanza, where the servant slashed her face several times with a razor. The servant was later jailed, and Costanza was jailed for adultery; Bernini himself was exonerated by the pope, even though he had committed a crime in ordering the face-slashing. Soon after, in May 1639, at age forty-one, Bernini wed a twenty-two-year-old Roman woman, Caterina Tezio, in an arranged marriage, under orders from Pope Urban. She bore him eleven children, including his youngest son Domenico Bernini, who would later be his first biographer. After his never-repeated fit of passion and bloody rage and his subsequent marriage, Bernini turned more sincerely to the practice of his faith, according to his early official biographers, whereas his brother Luigi was once again, in 1670, to bring great grief and scandal to the family by his sodomitic rape of a young Bernini workshop assistant at the construction site of the 'Constantine' memorial in St. Peter's Basilica. Bernini's architectural works include sacred and secular buildings and sometimes their urban settings and interiors. He made adjustments to existing buildings and designed new constructions. Amongst his best-known works are the Piazza San Pietro (1656–67), the piazza and colonnades in front of St. Peter's Basilica, and the interior decoration of the Basilica. Amongst his secular works are a number of Roman palaces: following the death of Carlo Maderno, he took over the supervision of the building works at the Palazzo Barberini from 1630, on which he worked with Borromini; the Palazzo Ludovisi (now Palazzo Montecitorio, started 1650); and the Palazzo Chigi (now Palazzo Chigi-Odescalchi, started 1664). His first architectural projects were the façade and refurbishment of the church of Santa Bibiana (1624–26) and the St. Peter's baldachin (1624–33), the bronze columned canopy over the high altar of St. Peter's Basilica. In 1629, and before "St. Peter's Baldachin" was complete, Urban VIII put him in charge of all the ongoing architectural works at St Peter's. However, Bernini fell out of favor during the papacy of Innocent X Pamphili: one reason was the pope's animosity towards the Barberini and hence towards their clients, including Bernini. Another reason was the failure of the bell towers designed and built by Bernini for St. Peter's Basilica, commencing during the reign of Urban VIII.
The completed north tower and the only partially completed south tower were ordered demolished by Innocent in 1646 because their excessive weight had caused cracks in the basilica's facade and threatened to do more calamitous damage. Professional opinion at the time was in fact divided over the true gravity of the situation (with Bernini's rival Borromini spreading an extreme, anti-Bernini catastrophic view of the problem) and over the question of responsibility for the damage: Who was to blame? Bernini? Pope Urban VIII, who forced Bernini to design over-elaborate towers? The deceased Architect of St. Peter's, Carlo Maderno, who built the weak foundations for the towers? Official papal investigations in 1680 in fact completely exonerated Bernini, while inculpating Maderno. Never wholly without patronage during the Pamphili years, after Innocent's death in 1655 Bernini regained a major role in the decoration of St. Peter's under Pope Alexander VII Chigi, leading to his design of the piazza and colonnade in front of St. Peter's. Further significant works by Bernini at the Vatican include the "Scala Regia" (1663–66), the monumental grand stairway entrance to the Vatican Palace, and the "Cathedra Petri", the Chair of Saint Peter, in the apse of St. Peter's, in addition to the Chapel of the Blessed Sacrament in the nave. Bernini did not build many churches from scratch; rather, his efforts were concentrated on pre-existing structures, such as the restored church of Santa Bibiana and in particular St. Peter's. He fulfilled three commissions for new churches in Rome and nearby small towns. Best known is the small but richly ornamented oval church of Sant'Andrea al Quirinale, done (beginning in 1658) for the Jesuit novitiate, representing one of the rare works of his hand with which, his son Domenico reports, Bernini was truly and deeply pleased. Bernini also designed churches in Castelgandolfo (San Tommaso da Villanova, 1658–1661) and Ariccia (Santa Maria Assunta, 1662–1664), and was responsible for the re-modeling of the Santuario della Madonna di Galloro (just outside of Ariccia), endowing it with a majestic new facade. When Bernini was invited to Paris in 1665 to prepare works for Louis XIV, he presented designs for the east facade of the Louvre Palace, but his projects were ultimately turned down in favour of the more sober and classic proposals of a committee consisting of three Frenchmen: Louis Le Vau, Charles Le Brun, and the doctor and amateur architect Claude Perrault, signaling the waning influence of Italian artistic hegemony in France. Bernini's projects were essentially rooted in the Italian Baroque urbanist tradition of relating public buildings to their settings, often leading to innovative architectural expression in urban spaces like "piazze" or squares. However, by this time the French absolutist monarchy preferred the classicising monumental severity of the Louvre's facade, no doubt with the added political bonus that it had been designed by a Frenchman. The final version did, however, include Bernini's feature of a flat roof behind a Palladian balustrade. During his lifetime Bernini lived in various residences throughout the city: principal among them was a palazzo right across from Santa Maria Maggiore, still extant at Via Liberiana 24, where he lived while his father was still alive; after his father's death in 1629, Bernini moved the clan to the long-ago-demolished Santa Marta neighborhood behind the apse of St.
Peter's Basilica, which afforded him more convenient access to the Vatican Foundry and to his working studio, also on the Vatican premises. In 1639, Bernini bought property on the corner of the via della Mercede and the via del Collegio di Propaganda Fide in Rome. This gave him the distinction of being one of only two artists (the other being Pietro da Cortona) to be proprietor of his own large palatial (though not sumptuous) residence, furnished as well with its own water supply. Bernini refurbished and expanded the existing palazzo on the Via della Mercede site, at what are now Nos. 11 and 12. (The building is sometimes referred to as "Palazzo Bernini," but that title more properly pertains to the Bernini family's later and larger home on Via del Corso, to which they moved in the early nineteenth century, now known as the Palazzo Manfroni-Bernini.) Bernini lived at No. 11, but this was extensively remodeled in the 19th century. One can imagine how galling it must have been for Bernini to witness, through the windows of his dwelling, the construction of the tower and dome of Sant'Andrea delle Fratte by his rival, Borromini, and also the demolition of the chapel that he, Bernini, had designed at the Collegio di Propaganda Fide, only to see it replaced by Borromini's chapel. The construction of Sant'Andrea, however, was completed by Bernini's close disciple, Mattia de' Rossi, and it contains (to this day) the marble originals of two of Bernini's own angels, executed by the master for the Ponte Sant'Angelo. True to the decorative dynamism of the Baroque, which loved the aesthetic pleasure and emotional delight afforded by the sight and sound of water in motion, among Bernini's most gifted and applauded creations were his Roman fountains, which were both utilitarian public works and personal monuments to their patrons, papal or otherwise. His first fountain, the 'Barcaccia' (commissioned in 1627, finished 1629) at the foot of the Spanish Steps, cleverly surmounted a challenge that Bernini was to face in several other fountain commissions: the low water pressure in many parts of Rome (Roman fountains were all driven by gravity alone). He created a low-lying flat boat that was able to take greatest advantage of the small amount of water available. Another example is the long-ago dismantled "Woman Drying Her Hair" fountain that Bernini created for the no-longer-extant Villa Barberini ai Bastioni on the edge of the Janiculum Hill overlooking St. Peter's Basilica. His other fountains include the "Fountain of the Triton", or "Fontana del Tritone", and the Barberini Fountain of the Bees, the "Fontana delle Api". The Fountain of the Four Rivers, or "Fontana dei Quattro Fiumi", in the Piazza Navona is an exhilarating masterpiece of spectacle and political allegory in which Bernini again brilliantly overcame the problem of the piazza's low water pressure, creating the illusion of an abundance of water that in reality did not exist. An oft-repeated but false anecdote tells that one of Bernini's river gods defers his gaze in disapproval of the facade of Sant'Agnese in Agone (designed by the talented, but less politically successful, rival Francesco Borromini); this is impossible because the fountain was built several years before the façade of the church was completed. Bernini was also the artist of the statue of the Moor in "La Fontana del Moro" in Piazza Navona (1653). Bernini's "Triton Fountain" is depicted musically in the second section of Ottorino Respighi's "Fountains of Rome."
Another major category of Bernini's activity was that of the tomb monument, a genre on which his distinctive new style exercised a decisive and long-enduring influence; included in this category are his tombs for Popes Urban VIII and Alexander VII (both in St. Peter's Basilica), Cardinal Domenico Pimentel (Santa Maria sopra Minerva, Rome, design only), and Matilda of Canossa (St. Peter's Basilica). Related to the tomb monument is the funerary memorial, of which Bernini executed several (most notably that of Maria Raggi in Santa Maria sopra Minerva, Rome), also of greatly innovative style and long-enduring influence. Among his smaller commissions, although not mentioned by either of his earliest biographers, Baldinucci or Domenico Bernini, is the Elephant and Obelisk, a sculpture located near the Pantheon, in the Piazza della Minerva, in front of the Dominican church of Santa Maria sopra Minerva. Pope Alexander VII decided that he wanted a small ancient Egyptian obelisk (which had been discovered beneath the piazza) to be erected on the same site, and in 1665 he commissioned Bernini to create a sculpture to support the obelisk. The sculpture of an elephant bearing the obelisk on its back was executed by one of Bernini's students, Ercole Ferrata, upon a design by his master, and finished in 1667. An inscription on the base relates the Egyptian goddess Isis and the Roman goddess Minerva to the Virgin Mary, who supposedly supplanted those pagan goddesses and to whom the church is dedicated. A popular anecdote concerns the elephant's smile. To find out why it is smiling, legend has it, the viewer must examine the rear end of the animal and notice that its muscles are tensed and its tail is shifted to the left as if it were defecating. The animal's rear is pointed directly at one of the headquarters of the Dominican Order, housing the offices of its Inquisitors as well as the office of Father Giuseppe Paglia, a Dominican friar who was one of the main antagonists of Bernini, as a final salute and last word. Among his minor commissions for non-Roman patrons or venues, in 1677 Bernini worked along with Ercole Ferrata to create a fountain for the Lisbon palace of the Portuguese nobleman, the Count of Ericeira: copying his earlier fountains, Bernini supplied the design of the fountain sculpted by Ferrata, featuring Neptune with four tritons around a basin. The fountain has survived and since 1945 has stood outside the precincts of the gardens of the Palacio Nacional de Queluz, several miles outside of Lisbon. Bernini would have studied painting as a normal part of his artistic training, begun in early adolescence under the guidance of his father, Pietro, in addition to some further training in the studio of the Florentine painter Cigoli. His earliest activity as a painter was probably no more than a sporadic diversion practiced mainly in his youth, until the mid-1620s, that is, the beginning of the pontificate of Pope Urban VIII (reigned 1623–1644), who ordered Bernini to study painting in greater earnest because the pontiff wanted him to decorate the Benediction Loggia of St. Peter's. The latter commission was never executed, most likely because the required large-scale narrative compositions were simply beyond Bernini's ability as a painter.
According to his early biographers, Baldinucci and Domenico Bernini, Bernini completed at least 150 canvases, mostly in the decades of the 1620s and 30s, but currently there are no more than 35-40 surviving paintings that can be confidently attributed to his hand. The extant, securely attributed works are mostly portraits, seen close up and set against an empty background, employing a confident, indeed brilliant, painterly brushstroke (similar to that of his Spanish contemporary Velasquez), free from any trace of pedantry, and a very limited palette of mostly warm, subdued colors with deep chiaroscuro. His work was immediately sought after by major collectors. Most noteworthy among these extant works are several, vividly penetrating self portraits, especially that in the Uffizi Gallery, Florence, purchased during Bernini's lifetime by Cardinal Leopoldo de' Medici. The only canvas that is securely dated is that of the Apostles Andrew and Thomas in London's National Gallery. As for Bernini's drawings, about 300 still exist; but this is a minuscule percentage of the drawings he would have created in his lifetime; these include rapid sketches relating to major sculptural or architectural commissions, presentation drawings given as gifts to his patrons and aristocratic friends, and exquisite, fully finished portraits, such as those of Agostino Mascardi (Ecole des Beaux-Arts, Paris) and Scipione Borghese and Sisinio Poli (both in New York's Morgan Library). Among the many sculptors who worked under his supervision (even though most were accomplished masters in their own right) were Luigi Bernini, Stefano Speranza, Giuliano Finelli, Andrea Bolgi, Giacomo Antonio Fancelli, Lazzaro Morelli, Francesco Baratta, Ercole Ferrata, the Frenchman Niccolò Sale, Giovanni Antonio Mari, Antonio Raggi, and Francois Duquesnoy. But his most trusted right-hand man in sculpture was Giulio Cartari, while in architecture it was Mattia de' Rossi, both of whom traveled to Paris with Bernini to assist him in his work there for King Louis XIV. Other architect disciples include Giovanni Battista Contini and Carlo Fontana while Swedish architect, Nicodemus Tessin the Younger, who visited Rome twice after Bernini's death, was also much influenced by him. Among his rivals in architecture were, above all, Francesco Borromini and Pietro da Cortona. Early in their careers they had all worked at the same time at the Palazzo Barberini, initially under Carlo Maderno and, following his death, under Bernini. Later on, however, they were in competition for commissions, and fierce rivalries developed, particularly between Bernini and Borromini. In sculpture, Bernini competed with Alessandro Algardi and Francois Duquesnoy, but they both died decades earlier than Bernini (respectively in 1654 and 1643), leaving Bernini effectively with no sculptor of his same exalted status in Rome. Francesco Mochi can also be included among Bernini's significant rivals, though he was not as accomplished in his art as Bernini, Algardi or Duquesnoy. 
There was also a succession of painters (the so-called 'pittori berniniani') who, working under the master's close guidance and at times according to his designs, produced canvases and frescos that were integral components of Bernini's larger multi-media works such as churches and chapels: Carlo Pellegrini, Guido Ubaldo Abbatini, Frenchman Guillaume Courtois (Guglielmo Cortese, known as 'Il Borgognone'), Ludovico Gimignani, and Giovanni Battista Gaulli (who, thanks to Bernini, was granted the prized commission to fresco the vault of the Jesuit mother church of the Gesù by Bernini's friend, Jesuit Superior General, Gian Paolo Oliva). As far as Caravaggio is concerned, in all the voluminous Bernini sources, his name appears only once, in the Chantelou Diary which records Bernini's disparaging remark about him (specifically his "Fortune Teller" that had just arrived from Italy as a Pamphilj gift to King Louis XIV). However, how much Bernini really scorned Caravaggio's art is a matter of debate whereas arguments have been made in favor of a strong influence of Caravaggio on Bernini. Bernini would of course have heard much about Caravaggio and seen many of his works not only because in Rome at the time such contact was impossible to avoid, but also because during his own lifetime Caravaggio had come to the favorable attention of Bernini's own early patrons, both the Borghese and the Barberini. Indeed, much like Caravaggio, Bernini used a theatrical-like light as an important aesthetic and metaphorical device in his religious settings, often using hidden light sources that could intensify the focus of religious worship or enhance the dramatic moment of a sculptural narrative. The most important primary source for the life of Bernini is the biography written by his youngest son, Domenico, entitled "Vita del Cavalier Gio. Lorenzo Bernino," published in 1713 though first compiled in the last years of his father's life (c. 1675–80). Filippo Baldinucci's "Life of Bernini," was published in 1682, and a meticulous private journal, the "Diary of the Cavaliere Bernini's Visit to France," was kept by the Frenchman Paul Fréart de Chantelou during the artist's four-month stay from June through October 1665 at the court of King Louis XIV. Also, there is a short biographical narrative, "The Vita Brevis of Gian Lorenzo Bernini", written by his eldest son, Monsignor Pietro Filippo Bernini, in the mid-1670s. Until the late 20th century, it was generally believed that two years after Bernini's death, Queen Christina of Sweden, then living in Rome, commissioned Filippo Baldinucci to write his biography, which was published in Florence in 1682. However, recent research now strongly suggests that it was in fact Bernini's sons (and specifically the eldest son, Mons. Pietro Filippo) who commissioned the biography from Baldinucci sometime in the late 1670s, with the intent of publishing it while their father was still alive. 
This would mean, first, that the commission did not originate with Queen Christina at all, who would merely have lent her name as patron (in order to hide the fact that the biography was coming directly from the family), and, secondly, that Baldinucci's narrative was largely derived from some pre-publication version of Domenico Bernini's much longer biography of his father, as evidenced by the extremely large amount of text repeated verbatim (there is no other explanation for the massive amount of verbatim repetition, and it is known that Baldinucci routinely copied verbatim material supplied by the family and friends of his subjects for his artists' biographies). As the most detailed account and the only one coming directly from a member of the artist's immediate family, Domenico's biography, despite having been published later than Baldinucci's, therefore represents the earliest and most important full-length biographical source of Bernini's life, even though it idealizes its subject and whitewashes a number of less-than-flattering facts about his life and personality. As one Bernini scholar has summarized, "Perhaps the most important result of all of the [Bernini] studies and research of these past few decades has been to restore to Bernini his status as the great, principal protagonist of Baroque art, the one who was able to create undisputed masterpieces, to interpret in an original and genial fashion the new spiritual sensibilities of the age, to give the city of Rome an entirely new face, and to unify the [artistic] language of the times." Few artists have had as decisive an influence on the physical appearance and emotional tenor of a city as Bernini had on Rome. Maintaining a controlling influence over all aspects of his many and large commissions and over those who aided him in executing them, he was able to carry out his unique and harmoniously uniform vision over the decades of his long and productive working life. Although by the end of Bernini's life there was in motion a decided reaction against his brand of flamboyant Baroque, the fact is that sculptors and architects continued to study his works and be influenced by them for several more decades (Nicola Salvi's later Trevi Fountain [inaugurated in 1735] is a prime example of the enduring post-mortem influence of Bernini on the city's landscape). In the eighteenth century Bernini and virtually all Baroque artists fell from favor in the neoclassical criticism of the Baroque, a criticism aimed above all at the latter's supposedly extravagant (and thus illegitimate) departures from the pristine, sober models of Greek and Roman antiquity. It is only from the late nineteenth century that art historical scholarship, in seeking a more objective understanding of artistic output within the specific cultural context in which it was produced, without the a priori prejudices of neoclassicism, began to recognize Bernini's achievements and slowly began to restore his artistic reputation. However, the reaction against Bernini and the too-sensual (and therefore "decadent"), too emotionally charged Baroque in the larger culture (especially in non-Catholic countries of northern Europe, and particularly in Victorian England) remained in effect until well into the twentieth century (most notable are the public disparagements of Bernini by Francesco Milizia, Joshua Reynolds, and Jacob Burckhardt).
Most of the popular eighteenth- and nineteenth-century tourist guides to Rome all but ignore Bernini and his work, or treat it with disdain, as in the case of the best-selling "Walks in Rome" (22 editions between 1871 and 1925) by Augustus J.C. Hare, who describes the angels on the Ponte Sant'Angelo as 'Bernini's Breezy Maniacs.' In the twenty-first century, however, Bernini and his Baroque have been enthusiastically restored to favor, both critical and popular. Since the anniversary year of his birth in 1998, there have been numerous Bernini exhibitions throughout the world, especially in Europe and North America, on all aspects of his work, expanding our knowledge of it and its influence. In the late twentieth century, before Italy switched to the euro, Bernini was commemorated on the front of an Italian banknote, with the back showing his equestrian statue of Constantine. Another outstanding sign of Bernini's enduring reputation came in the decision by architect I.M. Pei to insert a faithful copy in lead of his equestrian statue of King Louis XIV as the sole ornamental element in his massive modernist redesign of the entrance plaza to the Louvre Museum, completed to great acclaim in 1989 and featuring the giant Louvre Pyramid in glass. In 2000, best-selling novelist Dan Brown made Bernini and several of his Roman works the centerpiece of his political thriller "Angels & Demons", while British novelist Iain Pears made a missing Bernini bust the centerpiece of his best-selling murder mystery, "The Bernini Bust" (2003).
https://en.wikipedia.org/wiki?curid=12635
German literature German literature comprises those literary texts written in the German language. This includes literature written in Germany, Austria, the German parts of Switzerland and Belgium, Liechtenstein, South Tyrol in Italy and to a lesser extent works of the German diaspora. German literature of the modern period is mostly in Standard German, but there are some currents of literature influenced to a greater or lesser degree by dialects (e.g. Alemannic). Medieval German literature is literature written in Germany, stretching from the Carolingian dynasty; various dates have been given for the end of the German literary Middle Ages, the Reformation (1517) being the last possible cut-off point. The Old High German period is reckoned to run until about the mid-11th century; the most famous works are the "Hildebrandslied" and a heroic epic known as the "Heliand". Middle High German starts in the 12th century; the key works include "The Ring" (ca. 1410) and the poems of Oswald von Wolkenstein and Johannes von Tepl. The Baroque period (1600 to 1720) was one of the most fertile times in German literature. Modern literature in German begins with the authors of the Enlightenment (such as Herder). The Sensibility movement of the 1750s–1770s ended with Goethe's best-selling "Die Leiden des jungen Werther" (1774). The Sturm und Drang and Weimar Classicism movements were led by Johann Wolfgang von Goethe and Friedrich Schiller. German Romanticism was the dominant movement of the late 18th and early 19th centuries. Biedermeier refers to the literature, music, the visual arts and interior design in the period between the years 1815 (Vienna Congress), the end of the Napoleonic Wars, and 1848, the year of the European revolutions. Under the Nazi regime, some authors went into exile ("Exilliteratur") and others submitted to censorship ("internal emigration", "Innere Emigration"). The Nobel Prize in Literature has been awarded to German language authors thirteen times (as of 2009), or the third most often after English and French language authors (with 27 and 14 laureates, respectively), with winners including Thomas Mann, Hermann Hesse, and Günter Grass. Periodization is not an exact science but the following list contains movements or time periods typically used in discussing German literature. It seems worth noting that the periods of medieval German literature span two or three centuries, those of early modern German literature span one century, and those of modern German literature each span one or two decades. The closer one nears the present, the more debated the periodizations become. Medieval German literature refers to literature written in Germany, stretching from the Carolingian dynasty; various dates have been given for the end of the German literary Middle Ages, the Reformation (1517) being the last possible cut-off point. The Old High German period is reckoned to run until about the mid-11th century, though the boundary to Early Middle High German (second half of the 11th century) is not clear-cut. The most famous work in OHG is the "Hildebrandslied", a short piece of Germanic alliterative heroic verse which besides the "Muspilli" is the sole survivor of what must have been a vast oral tradition. Another important work, in the northern dialect of Old Saxon, is a life of Christ in the style of a heroic epic known as the "Heliand". 
Middle High German proper runs from the beginning of the 12th century, and in the second half of the 12th century, there was a sudden intensification of activity, leading to a 60-year "golden age" of medieval German literature referred to as the "mittelhochdeutsche Blütezeit" (1170–1230). This was the period of the blossoming of MHG lyric poetry, particularly "Minnesang" (the German variety of the originally French tradition of courtly love). One of the most important of these poets was Walther von der Vogelweide. The same sixty years saw the composition of the most important courtly romances. These are written in rhyming couplets, and again draw on French models such as Chrétien de Troyes, many of them relating Arthurian material, for example, "Parzival" by Wolfram von Eschenbach. The third literary movement of these years was a new revamping of the heroic tradition, in which the ancient Germanic oral tradition can still be discerned, but tamed and Christianized and adapted for the court. These high medieval heroic epics are written in rhymed strophes, not the alliterative verse of Germanic prehistory (for example, the "Nibelungenlied"). The Middle High German period is conventionally taken to end in 1350, while the Early New High German is taken to begin with the German Renaissance, after the invention of movable type in the mid-15th century. Therefore, the literature of the late 14th and the early 15th century falls, as it were, in the cracks between Middle and New High German, and can be classified as either. Works of this transitional period include "The Ring" (c. 1410), the poems of Oswald von Wolkenstein and Johannes von Tepl, the German versions of "Pontus and Sidonia", and arguably the works of Hans Folz and Sebastian Brant ("Ship of Fools", 1494), among others. The "Volksbuch" (chapbook) tradition which would flourish in the 16th century also finds its origin in the second half of the 15th century. The Baroque period (1600 to 1720) was one of the most fertile times in German literature. Many writers reflected the horrible experiences of the Thirty Years' War, in poetry and prose. Grimmelshausen's adventures of the young and naïve Simplicissimus, in the eponymous book "Simplicius Simplicissimus", became the most famous novel of the Baroque period. Martin Opitz established rules for the "purity" of language, style, verse and rhyme. Andreas Gryphius and Daniel Caspar von Lohenstein wrote German-language tragedies, or "Trauerspiele", often on Classical themes and frequently quite violent. Erotic, religious and occasional poetry appeared in both German and Latin. Sibylle Ursula von Braunschweig-Lüneburg wrote part of a novel, "Die Durchlauchtige Syrerin Aramena" ("Aramena, the noble Syrian lady"), which when complete would be the most famous courtly novel in German Baroque literature; it was finished by her brother Anton Ulrich and edited by Sigmund von Birken. The "Empfindsamkeit" (Sensibility) movement of the 1750s–1770s is represented by Friedrich Gottlieb Klopstock (1724–1803), Christian Fürchtegott Gellert (1715–1769), and Sophie de La Roche (1730–1807). The period culminates and ends in Goethe's best-selling "Die Leiden des jungen Werther" (1774).
Sturm und Drang (the conventional translation is "Storm and Stress"; a more literal translation, however, might be "storm and urge", "storm and longing", or "storm and impulse") is the name of a movement in German literature and music taking place from the late 1760s through the early 1780s, in which individual subjectivity and, in particular, extremes of emotion were given free expression in response to the confines of rationalism imposed by the Enlightenment and associated aesthetic movements. The philosopher Johann Georg Hamann is considered to be the ideologue of "Sturm und Drang", and Johann Wolfgang von Goethe was a notable proponent of the movement, though he and Friedrich Schiller ended their period of association with it, initiating what would become Weimar Classicism. Weimar Classicism (German "Weimarer Klassik" and "Weimarer Klassizismus") is a cultural and literary movement of Europe, and its central ideas were originally propounded by Johann Wolfgang von Goethe and Johann Christoph Friedrich von Schiller during the period 1786 to 1805. German Romanticism was the dominant movement of the late 18th and early 19th centuries. German Romanticism developed relatively late compared to its English counterpart, coinciding in its early years with the movement known as German Classicism or Weimar Classicism, which it opposed. In contrast to the seriousness of English Romanticism, the German variety is notable for valuing humor and wit as well as beauty. The early German romantics tried to create a new synthesis of art, philosophy, and science, looking to the Middle Ages as a simpler, more integrated period. As time went on, however, they became increasingly aware of the tenuousness of the unity they were seeking. Later German Romanticism emphasized the tension between the everyday world and the seemingly irrational and supernatural projections of creative genius. Heinrich Heine in particular criticized the tendency of the early romantics to look to the medieval past for a model of unity in art and society. Biedermeier refers to work in the fields of literature, music, the visual arts and interior design in the period between the years 1815 (Vienna Congress), the end of the Napoleonic Wars, and 1848, the year of the European revolutions, and contrasts with the Romantic era which preceded it. Typical Biedermeier poets are Annette von Droste-Hülshoff, Adelbert von Chamisso, Eduard Mörike, and Wilhelm Müller, the last three named having well-known musical settings by Robert Schumann, Hugo Wolf and Franz Schubert respectively. Young Germany ("Junges Deutschland") was a loose group of Vormärz writers which existed from about 1830 to 1850. It was essentially a youth movement (similar to those that had swept France and Ireland and originated in Italy). Its main proponents were Karl Gutzkow, Heinrich Laube, Theodor Mundt and Ludolf Wienbarg; Heinrich Heine, Ludwig Börne and Georg Büchner were also considered part of the movement. The wider circle included Willibald Alexis, Adolf Glassbrenner and Gustav Kühne. Poetic Realism (1848–1890) is represented by Theodor Fontane, Gustav Freytag, Gottfried Keller, Wilhelm Raabe, Adalbert Stifter, and Theodor Storm, and Naturalism (1880–1900) by Gerhart Hauptmann. A well-known writer of German literature was Franz Kafka. Kafka's novel "The Trial" was ranked third on Le Monde's 100 Books of the Century.
Kafka instills a macabre sensation in his writing, so much so that the term "Kafkaesque" was coined to describe his style. Kafka's writing offers a glimpse into his melancholic life, in which he felt isolated from other human beings, a feeling that was one of his inspirations for writing. Under the Nazi regime, some authors went into exile ("Exilliteratur") and others submitted to censorship ("inner emigration", "Innere Emigration"). Much contemporary poetry in the German language is published in literary magazines. DAS GEDICHT, for instance, has featured German poetry from Germany, Austria, Switzerland, and Luxemburg for the last twenty years. The Nobel Prize in Literature has been awarded to German-language authors thirteen times (as of 2009), or the third most often after English- and French-language authors (with 27 and 14 laureates, respectively).
https://en.wikipedia.org/wiki?curid=12636
Galilee Galilee (Hebrew: "HaGalil") is a region mainly located in northern Israel. The term "Galilee" traditionally refers to the mountainous part, divided into Upper Galilee and Lower Galilee. In modern, common usage, as well as at different times in history, "Galilee" referred and refers to all of the area that is north of the Mount Carmel-Mount Gilboa ridge. In this sense, it extends from the Israeli coastal plain and the shores of the Mediterranean Sea with Acre in the west, to the Jordan Rift Valley to the east; and from the Lebanese border in the north plus a piece bordering on the Golan Heights all the way to Dan at the base of Mount Hermon in the northeast, to Mount Carmel and Mount Gilboa in the south. This definition includes the plains of the Jezreel Valley north of Jenin and the Beth Shean Valley, the valley containing the Sea of Galilee, and the Hula Valley, although it usually does not include Haifa's immediate northern suburbs. By this definition it overlaps with much of the administrative Northern District of the country (which also includes the Golan Heights and part of Menashe Heights, but not Qiryat Tiv'on). Western Galilee is a common term referring to the western part of the Upper Galilee and its shore, and usually also the northwestern part of the Lower Galilee, mostly overlapping with the Acre sub-district. Galilee Panhandle is a common term referring to the "panhandle" in the east that extends to the north, where Lebanon is to the west, and includes the Hula Valley and the Ramot Naftali mountains of the Upper Galilee. Historically, the part of Southern Lebanon south of the east-west section of the Litani River also belonged to the region of Galilee, but the present article mainly deals with the Israeli part of the region. The region's Israelite name is from the Hebrew root גָּלִיל ("galil"), ultimately a unique word for 'district', and usually 'cylinder'. The Hebrew form used in Isaiah 8:23 (or 9:1 in different Biblical versions) is in the construct state, meaning 'Galilee of the nations', i.e. the part of Galilee inhabited by Gentiles at the time that the book was written. The region in turn gave rise to the English name for the "Sea of Galilee", referred to as such in many languages, including ancient Arabic. In the Hebrew language, the lake is referred to as the Kinneret (Numbers 34:11, etc.), from the Hebrew word for 'harp', describing its shape; as the Lake of Gennesaret (Luke 5:1, etc.), from a name derived from the words for 'valley' and either 'branch' or 'to guard', 'to watch', which may have been a reference to the town of Nazareth; and alternatively as the Sea of Tiberias (John 6:1, etc.), from the town of Tiberias at its southwestern end, itself named after the first-century CE Roman emperor Tiberius. These are the three names used in originally internal, Jewish-authored literature, rather than the "Sea of Galilee". However, Jews did use "the Galilee" to refer to the whole region (Aramaic הגלילי), including its lake. Most of Galilee consists of rocky terrain, at heights of between 500 and 700 m. Several high mountains are in the region, including Mount Tabor and Mount Meron, which have relatively low temperatures and high rainfall. As a result of this climate, flora and fauna thrive in the region, while many birds annually migrate from colder climates to Africa and back through the Hula–Jordan corridor. The streams and waterfalls, the latter mainly in Upper Galilee, along with vast fields of greenery and colourful wildflowers, as well as numerous towns of biblical importance, make the region a popular tourist destination.
Due to its high rainfall, mild temperatures and high mountains (Mount Meron's elevation is 1,000–1,208 m), the Upper Galilee region contains some distinctive flora and fauna: prickly juniper ("Juniperus oxycedrus"), Lebanese cedar ("Cedrus libani"), which grows in a small grove on Mount Meron, cyclamens, paeonias, and "Rhododendron ponticum", which sometimes appears on Meron. According to the Bible, Galilee was named by the Israelites and was the tribal region of Naphtali and Dan, at times overlapping the Tribe of Asher's land. However, Dan was dispersed among the whole people rather than isolated to the lands of Dan, as the Tribe of Dan was the hereditary local law enforcement and judiciary for the whole nation. Normally, Galilee is just referred to as Naphtali. Chapter 9 of 1 Kings states that Solomon rewarded his Phoenician ally, King Hiram I of Sidon, with twenty cities in the land of Galilee, which would then have been either settled by foreigners during and after the reign of Hiram, or by those who had been forcibly deported there by later conquerors such as the Assyrians. Hiram, to reciprocate previous gifts given to David, accepted the upland plain among the mountains of Naphtali and renamed it "the land of Cabul" for a time. During the expansion under the Hasmonean dynasty, much of the Galilee region was conquered and annexed by the first Hasmonean king of Judaea, Aristobulus I (104–103 BCE). Galilee in the first century was dotted with small towns and villages. The Jewish historian Josephus claims that there were 204 small towns in Galilee, but modern scholars believe this estimate to be an exaggeration. Many of these towns were located around the Sea of Galilee, which contained many edible fish and which was surrounded by fertile land. Salted, dried, and pickled fish were an important export good. In 4 BCE, a rebel named Judah plundered Galilee's largest city, Sepphoris. In response, the Syrian governor Publius Quinctilius Varus sacked Sepphoris and sold the population into slavery. After the death of Herod the Great that same year, the Roman emperor Augustus appointed Herod's son Herod Antipas as tetrarch of Galilee, which remained a Roman client state. Antipas paid tribute to the Roman Empire in exchange for Roman protection. The Romans did not station troops in Galilee, but threatened to retaliate against anyone who attacked it. As long as he continued to pay tribute, Antipas was permitted to govern however he wished and was permitted to mint his own coinage. Antipas was relatively observant of Jewish laws and customs. Although his palace was decorated with animal carvings, which many Jews regarded as a transgression against the law prohibiting idols, his coins bore only agricultural designs, which his subjects deemed acceptable. In general, Antipas was a capable ruler; Josephus does not record any instance of him using force to put down an uprising and he had a long, prosperous reign. However, many Jews probably resented him as not sufficiently devout. Antipas rebuilt the city of Sepphoris and, in either 18 CE or 19 CE, he founded the new city of Tiberias. These two cities became Galilee's largest cultural centers. They were the main centers of Greco-Roman influence, but were still predominantly Jewish. A massive gap existed between the rich and poor, but the lack of uprisings suggests that taxes were not exorbitantly high and that most Galileans did not feel their livelihoods were being threatened.
The archaeological discoveries of synagogues from the Hellenistic and Roman period in the Galilee show strong Phoenician influences, and a high level of tolerance for other cultures relative to other Jewish religious centers. Late in his reign, Antipas married his half-niece Herodias, who was already married to one of her other uncles. His first wife, whom he divorced, fled to her father Aretas, an Arab king, who invaded Galilee and defeated Antipas's troops before withdrawing. Both Josephus and the Gospel of Mark record that the itinerant preacher John the Baptist criticized Antipas over his marriage, and Antipas consequently had him imprisoned and then beheaded. In around 39 CE, at the urging of Herodias, Antipas went to Rome to request that he be elevated from the status of tetrarch to the status of king. The Romans found him guilty of storing arms, so he was removed from power and exiled, ending his forty-three-year reign. During the Great Revolt (66–73 CE), a Jewish mob destroyed Herod Antipas's palace. According to medieval Hebrew legend, Simeon bar Yochai, one of the most famed of all the Tannaim, wrote the Zohar while living in Galilee. Eastern Galilee retained a Jewish majority until at least the seventh century. After the Muslim conquest of the Levant in the 630s, the Galilee formed part of Jund al-Urdunn (the military district of Jordan), itself part of Bilad al-Sham (Islamic Syria). Its major towns were Tiberias, the capital of the district, Qadas, Beisan, Acre, Saffuriya, and Kabul. The Shia Fatimids conquered the region in the 10th century; a breakaway sect, venerating the Fatimid caliph al-Hakim, formed the Druze religion, centered in Mount Lebanon and partially in the Galilee. During the Crusades, Galilee was organized into the Principality of Galilee, one of the most important Crusader seigneuries. During the early Ottoman era, the Galilee was governed as the Safad Sanjak, initially part of the larger administrative unit of the Damascus Eyalet (1549–1660) and later part of the Sidon Eyalet (1660–1864). During the 18th century, the administrative division of Galilee was renamed the Acre Sanjak, and the Eyalet itself became centered in Acre, effectively becoming the Acre Eyalet between 1775 and 1841. The Jewish population of Galilee increased significantly following the expulsion of the Jews from Spain and their welcome by the Ottoman Empire. The community for a time made Safed an international center of cloth weaving and manufacturing, as well as a key site for Jewish learning. Today it remains one of Judaism's four holy cities and a center for kabbalah. In the mid-17th century, Galilee and Mount Lebanon became the scene of the Druze power struggle, which was accompanied by much destruction in the region and the decline of major cities. In the mid-18th century, Galilee was caught up in a struggle between the Arab leader Zahir al-Umar and the Ottoman authorities who were centred in Damascus. Zahir ruled Galilee for 25 years until the Ottoman loyalist Jezzar Pasha conquered the region in 1775. In 1831, the Galilee, a part of Ottoman Syria, switched hands from the Ottomans to Ibrahim Pasha of Egypt until 1840. During this period, aggressive social and political policies were introduced, which led to the violent 1834 Arab revolt. In the course of this revolt the Jewish community of Safed was greatly reduced, in the plunder of Safed by the rebels. The Arab rebels were subsequently defeated by the Egyptian troops, though in 1838, the Druze of Galilee led another uprising.
In 1834 and 1837, major earthquakes leveled most of the towns, resulting in great loss of life. Following the 1864 Tanzimat reforms in the Ottoman Empire, the Galilee remained within the Acre Sanjak, but was transferred from the Sidon Eyalet to the newly formed Syria Vilayet and shortly afterwards, from 1888, was administered from the Beirut Vilayet. In 1866, Galilee's first hospital, the Nazareth Hospital, was founded under the leadership of the American-Armenian missionary Dr. Kaloost Vartan, assisted by the German missionary John Zeller. In the early 20th century, Galilee remained part of the Acre Sanjak of Ottoman Syria. It was administered as the southernmost territory of the Beirut Vilayet. Following the defeat of the Ottoman Empire in World War I and the Armistice of Mudros, it came under British rule as part of the Occupied Enemy Territory Administration. Shortly after, in 1920, the region was included in the British Mandate territory, officially a part of Mandatory Palestine from 1923. After the 1948 Arab–Israeli war, nearly the whole of Galilee came under Israel's control. A large portion of the population fled or was forced to leave, leaving dozens of entire villages empty; however, a large Israeli Arab community remained based in and near the cities of Nazareth, Acre, Tamra, Sakhnin, and Shefa-'Amr, due in some measure to a successful rapprochement with the Druze. The kibbutzim around the Sea of Galilee were sometimes shelled by the Syrian army's artillery until Israel seized the western Golan Heights in the 1967 Six-Day War. During the 1970s and the early 1980s, the Palestine Liberation Organization (PLO) launched multiple attacks on towns and villages of the Upper and Western Galilee from Lebanon. This came in parallel with the general destabilization of Southern Lebanon, which became a scene of fierce sectarian fighting that deteriorated into the Lebanese Civil War. In the course of the war, Israel initiated Operation Litani (1978) and Operation Peace for Galilee (1982) with the stated objectives of destroying the PLO infrastructure in Lebanon, protecting the citizens of the Galilee and supporting allied Christian Lebanese militias. Israel took over much of southern Lebanon in support of Christian Lebanese militias until 1985, when it withdrew to a narrow security buffer zone. From 1985 to 2000, Hezbollah, and earlier Amal, engaged the South Lebanon Army, supported by the Israel Defense Forces, sometimes shelling Upper Galilee communities with Katyusha rockets. In May 2000, Israeli prime minister Ehud Barak unilaterally withdrew IDF troops from southern Lebanon, maintaining a security force on the Israeli side of the international border recognized by the United Nations. The move brought about the collapse of the South Lebanon Army and the takeover of Southern Lebanon by Hezbollah. However, despite the Israeli withdrawal, clashes between Hezbollah and Israel continued along the border, and UN observers condemned both for their attacks. The 2006 Israel-Lebanon conflict was characterized by round-the-clock Katyusha rocket attacks (with a greatly extended range) by Hezbollah on the whole of Galilee, with long-range, ground-launched missiles hitting as far south as the Sharon Plain, Jezreel Valley, and Jordan Valley below the Sea of Galilee. There were 1.2 million residents in Galilee, of whom 47% were Jewish. The Jewish Agency has attempted to increase the Jewish population in this area, but the non-Jewish population also has a high growth rate.
The largest cities in the region are Acre, Nahariya, Nazareth, Safed, Karmiel, Shaghur, Shefa-'Amr, Afula, and Tiberias. The port city of Haifa serves as a commercial center for the whole region. Because of its hilly terrain, most of the people in the Galilee live in small villages connected by relatively few roads. A railroad runs south from Nahariya along the Mediterranean coast, and a fork to the east is due to operate in 2015. The main sources of livelihood throughout the area are agriculture and tourism. Industrial parks are being developed, bringing further employment opportunities to the local population, which includes many recent immigrants. The Israeli government is contributing funding to the private initiative, the Galilee Finance Facility, organised by the Milken Institute and the Koret Economic Development Fund. The Galilee is home to a large Arab population, comprising a Muslim majority and two smaller populations, of Druze and Arab Christians, of comparable sizes. The majority of both Israeli Druze and Israeli Christians live in the Galilee. Other notable minorities are the Bedouin, the Maronites and the Circassians. The north-central portion of the Galilee, also known as Central Galilee, stretching from the border with Lebanon to the northern edge of the Jezreel Valley and including the cities of Nazareth and Sakhnin, has an Arab majority of 75%, with most of the Jewish population living in hilltop cities like Upper Nazareth. The northern half of the central Lower Galilee, surrounding Karmiel and Sakhnin, is known as the "Heart of the Galilee". The eastern Galilee is nearly 100% Jewish. This part includes the Finger of the Galilee, the Jordan River Valley, and the shores of the Sea of Galilee, and contains two of Judaism's Four Holy Cities. The southern part of the Galilee, including the Jezreel Valley and the Gilboa region, is also nearly 100% Jewish, with a few small Arab villages near the West Bank border. About 80% of the population of the Western Galilee is Jewish, all the way up to the Lebanese border. Jews also form a small majority in the mountainous Upper Galilee, with a significant Arab minority population (mainly Druze and Christians). As of 2011, the Galilee is attracting significant internal migration of Haredi Jews, who are increasingly moving to the Galilee and the Negev as an answer to rising housing prices in central Israel. Galilee is a popular destination for domestic and foreign tourists who enjoy its scenic, recreational, and gastronomic offerings. The Galilee attracts many Christian pilgrims, as many of the miracles of Jesus occurred, according to the New Testament, on the shores of the Sea of Galilee—including his walking on water, calming the storm, and feeding five thousand people in Tabgha. In addition, numerous sites of biblical importance are located in the Galilee, such as Megiddo, the Jezreel Valley, Mount Tabor, Hazor, the Horns of Hattin, and more. A popular hiking trail known as the "yam leyam", or sea-to-sea, starts hikers at the Mediterranean. They then hike through the Galilee mountains, Tabor, Neria, and Meron, until their final destination, the Kinneret (Sea of Galilee). In April 2011, Israel unveiled the "Jesus Trail", a 40-mile (60-km) hiking trail in the Galilee for Christian pilgrims.
The trail includes a network of footpaths, roads, and bicycle paths linking sites central to the lives of Jesus and his disciples, including Tabgha, the traditional site of Jesus' miracle of the loaves and fishes, and the Mount of Beatitudes, where he delivered his Sermon on the Mount. It ends at Capernaum on the shores of the Sea of Galilee, where Jesus espoused his teachings. Many kibbutz and moshav families operate "Zimmern" (German for "rooms", the local term for bed and breakfasts). Numerous festivals are held throughout the year, especially in the autumn and spring holiday seasons. These include the Acre (Acco) Festival of Alternative Theater, the olive harvest festival, music festivals featuring Anglo-American folk, klezmer, Renaissance, and chamber music, and the Karmiel Dance Festival. The cuisine of the Galilee is very diverse. The meals are lighter than in the central and southern regions. Dairy products are heavily consumed (especially the Safed cheese that originated in the mountains of the Upper Galilee). Herbs like thyme, mint, parsley, basil, and rosemary are very common and used with everything, including dips, meat, fish, stews and cheese. In the eastern part of the Galilee, freshwater fish is eaten as much as meat (especially the tilapia that lives in the Sea of Galilee, the Jordan River, and other streams in the region); dishes include fish stuffed with thyme and grilled with rosemary for flavor, or stuffed with oregano leaves, then topped with parsley and served with lemon for squeezing. This technique exists in other parts of the country, including the coasts of the Mediterranean and the Red Sea. A specialty of the region is baked tilapia flavored with celery, mint and a lot of lemon juice. Baked fish with tahini is also common in Tiberias, while coastal Galileans prefer to replace the tahini with yogurt and add sumac on top. The Galilee is famous for its olives, pomegranates, wine and especially its labneh w'za'atar, which is served with pita bread; meat stews with wine, pomegranates and herbs such as akub, parsley, khalmit, mint and fennel are common. Galilean kubba is usually flavored with cumin, cinnamon, cardamom, concentrated pomegranate juice, onion, parsley and pine nuts and served as meze with a tahini dip. Kebabs are made in almost the same way, with sumac replacing the cardamom and carob sometimes replacing the pomegranate juice. Because of its climate, beef has become more popular than lamb, although both are still eaten there. Dates are popular in the tropical climate of the Eastern Galilee. The Galilee is often divided into subregions, which frequently overlap.
https://en.wikipedia.org/wiki?curid=12639
Goths The Goths were a Germanic people who played a major role in the fall of the Western Roman Empire and the emergence of Medieval Europe. They were first definitively reported by Graeco-Roman authors in the 3rd century AD, living north of the Danube in what is now Ukraine, Moldova and Romania. Later, many moved into the Roman Empire, or settled west of the Carpathians near what is now Hungary. A people called the "Gutones"—possibly early Goths—are documented living near the lower Vistula River in the 1st century, where they are associated with the archaeological Wielbark culture. In his book "Getica", the Gothic historian Jordanes claimed that the Goths originated in southern Scandinavia more than 1000 years earlier, but his reliability is disputed. The Wielbark culture expanded southwards towards the Black Sea, where by the late 3rd century it contributed to the formation of the Chernyakhov culture, which is associated with the Goths who were in frequent conflict and contact with the Roman Empire. By the 4th century at the latest, several groups were distinguishable, among whom the Thervingi and Greuthungi were the most powerful. During this time, Ulfilas began the conversion of Goths to Arianism. In the late 4th century, the lands of the Goths were invaded from the east by the Huns. In the aftermath of this event, several groups of Goths came under Hunnic domination, while others migrated further west or sought refuge inside the Roman Empire. Goths who entered the empire by crossing the Danube inflicted a devastating defeat upon the Romans at the Battle of Adrianople in 378. These Goths would form the Visigoths, and under their king Alaric I they began a long migration, eventually establishing a Visigothic Kingdom in Spain at Toledo. Meanwhile, Goths under Hunnic rule gained their independence in the 5th century, most importantly the Ostrogoths. Under their king Theodoric the Great, these Goths established an Ostrogothic Kingdom in Italy at Ravenna. The Ostrogothic Kingdom was destroyed by the Eastern Roman Empire in the 6th century, while the Visigothic Kingdom was conquered by the Umayyad Caliphate in the early 8th century. Remnants of Gothic communities in the Crimea, known as the Crimean Goths, lingered on for several centuries, although Goths would eventually cease to exist as a distinct people. In the Gothic language, the Goths were called the *"Gut-þiuda" ("Gothic people") or *"Gutans" (Goth). The Proto-Germanic form of the Gothic name is *"Gutaniz". This form is identical to that of the Gutes and closely related to that of the Geats. Though these names probably mean the same thing, their exact meaning is uncertain. They are all thought to be related to the Proto-Germanic verb *"geuta-", which means "to pour". The Goths and other East Germanic-speaking groups, such as the Vandals and Gepids, eventually came to live outside of Germania, and were thereafter never considered "Germani" by ancient Roman authors, who consistently categorized them among the "Scythians" or other peoples who had historically inhabited the area. The Goths were Germanic-speaking. They are classified as a Germanic people by modern scholars, and are today sometimes referred to as "Germani". Roman authors first mention the Goths in the third century. Scholars do not agree about whether these Goths had an identifiable and continuous history and identity which stretched before then. 
However, several lines of evidence exist which point to an origin near the Vistula river, and possibly also connections to Scandinavia. A crucial source on Gothic history is "Getica" by the 6th century Gothic historian Jordanes. "Getica" claims to be based on an earlier lost work by Cassiodorus, which was in turn based upon an even earlier work by the Gothic historian Ablabius. Historians dispute the reliability of "Getica", and the extent to which it is derived from Gothic oral traditions. According to Jordanes, the Goths originated in Scandza (Scandinavia), from where they emigrated by sea under their king Berig. The authenticity and accuracy of Jordanes' claim of Gothic origins in Scandinavia are disputed among historians. Similarities between the Gothic name and that of the Gutes and Geats, and accounts of emigration from the medieval Gutasaga, have been cited as evidence of an origin in Gotland or Götaland. Dissimilarities between the Gothic language and Scandinavian languages have been cited as evidence against a Scandinavian origin. The archaeological evidence for a Gothic origin in Scandinavia is unclear. Jordanes writes that the Goths under Berig settled an area near the Vistula, which they named Gothiscandza. Modern scholars generally locate Gothiscandza in the Wielbark culture of modern-day northern Poland. This culture is usually ascribed to the Goths, Rugii and other Germanic peoples. The Wielbark culture emerged in the lower Vistula and along the Pomeranian coast in the 1st century AD, replacing the preceding Oksywie culture. It is primarily distinguished from the Oksywie culture by the practice of inhumation, the absence of weapons in graves, and the presence of stone circles. This area had been intimately connected with Scandinavia since the time of the Nordic Bronze Age and the Lusatian culture. Jordanes writes that the Goths, soon after settling Gothiscandza, seized the lands of the Ulmerugi (Rugii). Archaeological evidence shows an early southward expansion of the Wielbark culture along the left bank of the Vistula. This area has a particularly high concentration of stone circles and inhumation burials, and is often specifically equated with Gothiscandza. The Goths are generally believed to have been first attested by Greco-Roman sources in the 1st century under the name "Gutones". The equation between Gutones and later Goths is disputed by several historians. Around 15 AD, Strabo mentions the Butones, Lugii, Semnones and others as one of a large group of peoples who came under the domination of the Marcomannic king Maroboduus. The "Butones" are generally equated with the Gutones. The Lugii have sometimes been considered the same people as the Vandals, with whom they were certainly closely affiliated. The Vandals are associated with the Przeworsk culture, which was located to the south of the Wielbark culture. Wolfram suggests that the Gutones were clients of the Lugii and Vandals in the 1st century AD. In 77 AD, Pliny the Elder mentions the Gutones as one of the peoples of Germania. He writes that the Gutones, Burgundiones, Varini and Carini belong to the Vandili. Pliny classifies the Vandili as one of the five principal "German races", along with the coastal Ingvaeones, Istvaeones, Irminones and Peucini. In an earlier chapter Pliny writes that the 4th century BC traveler Pytheas encountered a people called the "Guiones". Some scholars have equated these "Guiones" with the Gutones, but the authenticity of the Pytheas account is uncertain. 
In his work "Germania" from around 98 AD, Tacitus writes that the Gotones/Gothones and the neighboring Rugii and Lemovii were "Germani" who carried round shields and short swords, and lived near the Ocean, beyond the Vandals. He described them as "ruled by kings, a little more strictly than the other German tribes". In another notable work, "The Annals", Tacitus writes that the Gotones had assisted Catualda, a young Marcomannic exile, in overthrowing the rule of Maroboduus. Prior to this, it is probable that both the Gutones and Vandals had been subjects of the Marcomanni. Sometime after settling Gothiscandza, Jordanes writes that the Goths defeated the neighboring Vandals. Wolfram believes the Gutones freed themselves from Vandalic domination at the beginning of the 2nd century AD. In his work Geography from around 150 AD, Ptolemy mentions the Gutones/Gythones as living east of the Vistula in Sarmatia, between the Veneti and the Fenni. In an earlier chapter he mentions a people called the Gutae/Gautae as living in southern Scandia. These Gutae/Gautae are probably the same as the later Gauti mentioned by Procopius. Wolfram suggests that there were close relations between the Gutones/Gythones and Gutae/Gautae, and that they might have been of common origin. Beginning in the middle 2nd century, the Wielbark culture shifted southeast towards the Black Sea. During this time the Wielbark culture is believed to have ejected and partially absorbed peoples of the Przeworsk culture. This was part of a wider southward movement of eastern Germanic tribes, which was probably caused by massive population growth. As a result, other tribes were pushed towards the Roman Empire, contributing to the beginning of the Marcomannic Wars. By 200 AD, Wielbark Goths were probably being recruited into the Roman army. According to Jordanes, the Goths entered Oium, part of Scythia, under the king Filimer, where they defeated the Spali. The name "Spali" may mean "the giants" in Slavic, and the Spali were thus probably not Slavs. In the early 3rd century AD, western Scythia was inhabited by the agricultural Zarubintsy culture and the nomadic Sarmatians. Prior to the Sarmatians the area had been settled by the Bastarnae, who are believed to have carried out a migration similar to the Goths in the 3rd century BC. Peter Heather considers the Filimer story to be at least partially derived from Gothic oral tradition. The fact that the expanding Goths appear to have preserved their Gothic language during their migration, suggests that their movement involved a faily large number of people. By the mid 3rd century AD, the Wielbark culture had contributed to the formation of the Chernyakhov culture in Scythia. This strikingly uniform culture came to stretch from the Danube in the west to the Don in the east. It is believed to have been dominated by the Goths and other Germanic groups such as the Heruli. It nevertheless also included Iranian, Dacian, Roman and probably Slavic elements as well. The first incursion of the Roman Empire that can be attributed to Goths is the sack of Histria in 238. The first references to the Goths in the 3rd century call them "Scythians", as this area, known as Scythia, had historically had been occupied by an unrelated people of that name. It is in the late 3rd century that the name "Goths" () is first mentioned. Ancient authors do not identify the Goths with the earlier Gutones. Philologists and linguists have no doubt that the names are the same. 
On the Pontic steppe the Goths quickly adopted several nomadic customs from the Sarmatians. They excelled at horsemanship, archery and falconry, and were also accomplished agriculturalists and seafarers. J. B. Bury describes the Gothic period as "the only non-nomadic episode in the history of the steppe." William H. McNeill compares the migration of the Goths to that of the early Mongols, who migrated southward from the forests and came to dominate the eastern Eurasian steppe around the same time as the Goths in the west. From the 240s at the earliest, Goths were heavily recruited into the Roman Army to fight in the Roman–Persian Wars, notably participating at the Battle of Misiche in 244. An inscription at the Ka'ba-ye Zartosht in Parthian, Persian and Greek commemorates the Persian victory over the Romans and the troops drawn from "Gwt W Grmany xštr", the Gothic and German kingdoms, which is probably a Parthian gloss for the Danubian (Gothic) "limes" and the Germanic "limes". Meanwhile, Gothic raids on the Roman Empire continued. In 250–251, the Gothic king Cniva captured the city of Philippopolis and inflicted a devastating defeat upon the Romans at the Battle of Abrittus, in which the Roman Emperor Decius was killed. This was one of the most disastrous defeats in the history of the Roman army. The first Gothic seaborne raids took place in the 250s. The first two incursions into Asia Minor took place between 253 and 256, and are attributed to Boranoi by Zosimus. This may not be an ethnic term but may just mean "people from the north". It is unknown if Goths were involved in these first raids. Gregory Thaumaturgus attributes a third attack to Goths and Boradoi, and claims that some, "forgetting that they were men of Pontus and Christians," joined the invaders. An unsuccessful attack on Pityus was followed in the second year by another, which sacked Pityus and Trabzon and ravaged large areas in the Pontus. In the third year, a much larger force devastated large areas of Bithynia and the Propontis, including the cities of Chalcedon, Nicomedia, Nicaea, Apamea Myrlea, Cius and Bursa. By the end of the raids, the Goths had seized control over Crimea and the Bosporus and captured several cities on the Euxine coast, including Olbia and Tyras, which enabled them to engage in widespread naval activities. After a 10-year hiatus, the Goths and the Heruli, with a raiding fleet of 500 ships, sacked Heraclea Pontica, Cyzicus and Byzantium. They were defeated by the Roman navy but managed to escape into the Aegean Sea, where they ravaged the islands of Lemnos and Scyros, broke through Thermopylae and sacked several cities of southern Greece (province of Achaea) including Athens, Corinth, Argos, Olympia and Sparta. Then an Athenian militia, led by the historian Dexippus, pushed the invaders to the north where they were intercepted by the Roman army under Gallienus. He won an important victory near the Nessos (Nestos) river, on the boundary between Macedonia and Thrace, the Dalmatian cavalry of the Roman army earning a reputation as good fighters. Reported barbarian casualties were 3,000 men. Subsequently, the Heruli leader Naulobatus came to terms with the Romans. After Gallienus was assassinated outside Milan in the summer of 268 in a plot led by high officers in his army, Claudius was proclaimed emperor and headed to Rome to establish his rule. Claudius' immediate concerns were with the Alamanni, who had invaded Raetia and Italy. 
After he defeated them in the Battle of Lake Benacus, he was finally able to take care of the invasions in the Balkan provinces. In the meantime, a second and larger sea-borne invasion had started. An enormous coalition consisting of Goths (Greuthungi and Thervingi), Gepids and Peucini, led again by the Heruli, assembled at the mouth of the river Tyras (Dniester). The "Augustan History" and Zosimus claim a total number of 2,000–6,000 ships and 325,000 men. This is probably a gross exaggeration but remains indicative of the scale of the invasion. After failing to storm some towns on the coasts of the western Black Sea and the Danube (Tomi, Marcianopolis), the invaders attacked Byzantium and Chrysopolis. Part of their fleet was wrecked, either because of the Goths' inexperience in sailing through the violent currents of the Propontis or because they were defeated by the Roman navy. Then they entered the Aegean Sea and a detachment ravaged the Aegean islands as far as Crete, Rhodes and Cyprus. The fleet probably also sacked Troy and Ephesus, destroying the Temple of Artemis, one of the Seven Wonders of the Ancient World. While their main force had constructed siege works and was close to taking the cities of Thessalonica and Cassandreia, it retreated to the Balkan interior at the news that the emperor was advancing. Learning of the approach of Claudius, the Goths first attempted to directly invade Italy. They were engaged near Naissus by a Roman army led by Claudius advancing from the north. The battle most likely took place in 269, and was fiercely contested. Large numbers on both sides were killed but, at the critical point, the Romans tricked the Goths into an ambush by pretending to retreat. Some 50,000 Goths were allegedly killed or taken captive and their base at Thessalonica destroyed. Apparently Aurelian, who was in charge of all Roman cavalry during Claudius' reign, led the decisive attack in the battle. Some survivors were resettled within the empire, while others were incorporated into the Roman army. The battle ensured the survival of the Roman Empire for another two centuries. In 270, after the death of Claudius, Goths under the leadership of Cannabaudes again launched an invasion of the Roman Empire, but were defeated by Aurelian, who, however, did surrender Dacia beyond the Danube. Around 275 the Goths launched a last major assault on Asia Minor, where piracy by Black Sea Goths was causing great trouble in Colchis, Pontus, Cappadocia, Galatia and even Cilicia. They were defeated sometime in 276 by Emperor Marcus Claudius Tacitus. By the late 3rd century, there were at least two groups of Goths, separated by the Dniester River: the Thervingi and the Greuthungi. The Gepids, who lived northwest of the Goths, are also attested at this time. Jordanes writes that the Gepids shared common origins with the Goths. In the late 3rd century, as recorded by Jordanes, the Gepids, under their king Fastida, utterly defeated the Burgundians, and then attacked the Goths and their king Ostrogotha. Out of this conflict, Ostrogotha and the Goths emerged victorious. In the last decades of the 3rd century, large numbers of Carpi are recorded as fleeing Dacia for the Roman Empire, having probably been driven from the area by Goths. In 332, Constantine helped the Sarmatians to settle on the north banks of the Danube to defend against the Goths' attacks and thereby enforce the Roman border. Around 100,000 Goths were reportedly killed in battle, and Aoric, son of the Thervingian king Ariaric, was captured. 
Eusebius, a historian who wrote in Greek in the fourth century, wrote that in 334, Constantine evacuated approximately 300,000 Sarmatians from the north bank of the Danube after a revolt of the Sarmatians' slaves. From 335 to 336, Constantine, continuing his Danube campaign, defeated many Gothic tribes. Having been driven from the Danube by the Romans, the Thervingi invaded the territory of the Sarmatians of the Tisza. In this conflict, the Thervingi were led by Vidigoia, "the bravest of the Goths", and were victorious, although Vidigoia was killed. Jordanes states that Aoric was succeeded by Geberic, "a man renowned for his valor and noble birth", who waged war on the Hasdingi Vandals and their king Visimar, forcing them to settle in Pannonia under Roman protection. Both the Greuthungi and Thervingi became heavily Romanized during the 4th century. This came about through trade with the Romans, as well as through Gothic membership of a military covenant, which was based in Byzantium and involved pledges of military assistance. Reportedly, 40,000 Goths were brought by Constantine to defend Constantinople in his later reign, and the Palace Guard was thereafter mostly composed of Germanic warriors, as Roman soldiers by this time had largely lost military value. The Goths increasingly became soldiers in the Roman armies in the 4th century, leading to a significant Germanization of the Roman Army. Without the recruitment of Germanic warriors in the Roman Army, the Roman Empire would not have survived for as long as it did. Goths who gained prominent positions in the Roman military include Gainas, Tribigild, Fravitta and Aspar. Mardonius, a Gothic eunuch, was the childhood tutor and later adviser of Roman emperor Julian, on whom he had an immense influence. The Gothic penchant for wearing skins became fashionable in Constantinople, a fashion which was loudly denounced by conservatives. The 4th century Greek historian Eunapius described the Goths' characteristic powerful musculature in a pejorative way: "Their bodies provoked contempt in all who saw them, for they were far too big and far too heavy for their feet to carry them, and they were pinched in at the waist – just like those insects Aristotle writes of." The 4th century Greek bishop Synesius compared the Goths to wolves among sheep, mocked them for wearing skins and questioned their loyalty towards Rome: A man in skins leading warriors who wear the chlamys, exchanging his sheepskins for the toga to debate with Roman magistrates and perhaps even sit next to a Roman consul, while law-abiding men sit behind. Then these same men, once they have gone a little way from the senate house, put on their sheepskins again, and when they have rejoined their fellows they mock the toga, saying that they cannot comfortably draw their swords in it. In the 4th century, Geberic was succeeded by the Greuthungian king Ermanaric, who embarked on a large-scale expansion. Jordanes states that Ermanaric conquered a large number of warlike tribes, including the Heruli (who were led by Alaric), the Aesti and the Vistula Veneti, who, although militarily weak, were very numerous, and put up a strong resistance. Jordanes compares the conquests of Ermanaric to those of Alexander the Great, and states that he "ruled all the nations of Scythia and Germany by his own prowess alone." 
Interpreting Jordanes, Herwig Wolfram estimates that Ermanaric dominated a vast area of the Pontic Steppe stretching from the Baltic Sea to the Black Sea as far eastwards as the Ural Mountains, encompassing not only the Greuthungi, but also Finnic peoples, Slavs (such as the Antes), Rosomoni (Roxolani), Alans, Huns, Sarmatians and probably Aestii (Balts). According to Wolfram, it is certainly possible that the sphere of influence of the Chernyakhov culture could have extended well beyond its archaeological extent. Chernyakhov archaeological finds have been found far to the north in the forest steppe, suggesting Gothic domination of this area. Peter Heather, on the other hand, contends that the extent of Ermanaric's power is exaggerated. Ermanaric's possible dominance of the Volga-Don trade routes has led historians to consider his realm a forerunner of the Viking-founded state of Kievan Rus'. In the western part of Gothic territories, dominated by the Thervingi, there were also populations of Taifali, Sarmatians and other Iranian peoples, Dacians, Daco-Romans and other Romanized populations. According to Hervarar saga ok Heiðreks (The Saga of Hervör and Heidrek), a 13th-century legendary saga, Árheimar was the capital of Reidgotaland, the land of the Goths. The saga states that it was located on the Dnieper river. Jordanes refers to the region as Oium. In the 360s, Athanaric, son of Aoric and leader of the Thervingi, supported the usurper Procopius against the Eastern Roman Emperor Valens. In retaliation, Valens invaded the territories of Athanaric and defeated him, but was unable to achieve a decisive victory. Athanaric and Valens thereupon negotiated a peace treaty, favorable to the Thervingi, on a boat in the Danube river, as Athanaric refused to set foot within the Roman Empire. Soon afterwards, Fritigern, a rival of Athanaric, converted to Arianism, gaining the favor of Valens. Athanaric and Fritigern thereafter fought a civil war in which Athanaric appears to have been victorious. Athanaric subsequently carried out a crackdown on Christianity in his realm. Around 375 the Huns overran the Alans, an Iranian people living to the east of the Goths, and then, along with Alans, invaded the territory of the Goths themselves. A source for this period is the Roman historian Ammianus Marcellinus, who wrote that Hunnic domination of the Gothic kingdoms in Scythia began in the 370s. It is possible that the Hunnic attack came as a response to the Gothic expansion eastwards. Upon the suicide of Ermanaric, the Greuthungi gradually fell under Hunnic domination. Christopher I. Beckwith suggests that the Hunnic thrust into Europe and the Roman Empire was an attempt to subdue the independent Goths in the west. The Huns fell upon the Thervingi, and Athanaric sought refuge in the mountains (referred to as Caucaland in the sagas). Ambrose makes a passing reference to Athanaric's royal titles before 376 in his "De Spiritu Sancto" (On the Holy Spirit). Battles between the Goths and the Huns are described in the Hlöðskviða (The Battle of the Goths and Huns), a medieval Icelandic saga. The sagas recall that Gizur, king of the Geats, came to the aid of the Goths in an epic conflict with the Huns, although this saga might derive from a later Gothic-Hunnic conflict. Although the Huns successfully subdued many of the Goths who subsequently joined their ranks, Fritigern approached the Eastern Roman emperor Valens in 376 with a portion of his people and asked to be allowed to settle on the south bank of the Danube. 
Valens permitted this, and even assisted the Goths in their crossing of the river (probably at the fortress of Durostorum). The Gothic evacuation across the Danube was probably not spontaneous, but rather a carefully planned operation initiated after long debate among leading members of the community. Upon arrival, the Goths were to be disarmed according to their agreement with the Romans, although many of them still managed to keep their arms. The Moesogoths settled in Thrace and Moesia. Mistreated by corrupt local Roman officials, the Gothic refugees were soon experiencing a famine; some are recorded as having been forced to sell their children to Roman slave traders in return for rotten dog meat. Enraged by this treachery, Fritigern unleashed a wide-scale rebellion in Thrace, in which he was joined not only by Gothic refugees and slaves, but also by disgruntled Roman workers and peasants, and Gothic deserters from the Roman Army. The ensuing conflict, known as the Gothic War, lasted for several years. Meanwhile, a group of Greuthungi, led by the chieftains Alatheus and Saphrax, who were co-regents with Vithericus, son and heir of the Greuthungi king Vithimiris, crossed the Danube without Roman permission. The Gothic War culminated in the Battle of Adrianople in 378, in which the Romans were badly defeated and Valens was killed. Following the decisive Gothic victory at Adrianople, Julius, the magister militum of the Eastern Roman Empire, organized a wholesale massacre of Goths in Asia Minor, Syria and other parts of the Roman East. Fearing rebellion, Julius lured the Goths into the confines of urban streets from which they could not escape and massacred soldiers and civilians alike. As word spread, the Goths rioted throughout the region, and large numbers were killed. Survivors may have settled in Phrygia. With the rise of Theodosius I in 379, the Romans launched a renewed offensive to subdue Fritigern and his followers. Around the same time, Athanaric arrived in Constantinople, having fled Caucaland through the scheming of Fritigern. Athanaric received a warm reception from Theodosius, praised the Roman Emperor in return, and was honored with a magnificent funeral by the emperor following his death shortly after his arrival. In 382, Theodosius decided to enter peace negotiations with the Thervingi, which were concluded on 3 October 382. The Thervingi were subsequently made foederati of the Romans in Thrace and obliged to provide troops to the Roman army. In the aftermath of the Hunnic onslaught, two major groups of the Goths would eventually emerge, the Visigoths and Ostrogoths. Visigoths means the "good" or "noble" Goths, while Ostrogoths means "Goths of the rising sun" or "East Goths". The Visigoths, led by the Balti dynasty, claimed descent from the Thervingi and lived as foederati inside Roman territory, while the Ostrogoths, led by the Amali dynasty, claimed descent from the Greuthungi and were subjects of the Huns. Procopius interpreted the name "Visigoth" as "western Goths" and the name "Ostrogoth" as "eastern Goth", reflecting the geographic distribution of the Gothic realms at that time. A people closely related to the Goths, the Gepids, were also living under Hunnic domination. A smaller group of Goths were the Crimean Goths, who remained in Crimea and maintained their Gothic identity well into the Middle Ages. The Visigoths were a new Gothic political unit brought together during the career of their first leader, Alaric I. 
Following a major settlement of Goths in the Balkans made by Theodosius in 382, Goths received prominent positions in the Roman army. Relations with Roman civilians were sometimes uneasy. In 391, Gothic soldiers, with the blessing of Theodosius I, massacred thousands of Roman spectators at the Hippodrome in Thessalonica as vengeance for the lynching of the Gothic general Butheric. The Goths suffered heavy losses while serving Theodosius in the civil war of 394 against Eugenius and Arbogast. In 395, following the death of Theodosius I, Alaric and his Balkan Goths invaded Greece, where they sacked Piraeus (the port of Athens) and destroyed Corinth, Megara, Argos, and Sparta. Athens itself was spared by paying a large bribe, and the Eastern emperor Flavius Arcadius subsequently appointed Alaric magister militum ("master of the soldiers") in Illyricum in 397. In 401 and 402, Alaric made two attempts at invading Italy, but was defeated by Stilicho. In 405–406, another Gothic leader, Radagaisus, also attempted to invade Italy, and was also defeated by Stilicho. In 408, the Western Roman emperor Flavius Honorius ordered the execution of Stilicho and his family, then incited the Roman population to massacre tens of thousands of wives and children of Goths serving in the Roman military. Subsequently, around 30,000 Gothic soldiers defected to Alaric. Alaric in turn invaded Italy, seeking to pressure Honorius into granting him permission to settle his people in North Africa. In Italy, Alaric liberated tens of thousands of Gothic slaves, and in 410 he sacked the city of Rome. Although the city's riches were plundered, the civilian inhabitants of the city were treated humanely, and only a few buildings were burned. Alaric died soon afterwards, and was buried along with his treasure in an unknown grave under the Busento river. Alaric was succeeded by his brother-in-law Athaulf, husband of Honorius' sister Galla Placidia, who had been seized during Alaric's sack of Rome. Athaulf settled the Visigoths in southern Gaul. After failing to gain recognition from the Romans, Athaulf retreated into Hispania in early 415, and was assassinated in Barcelona shortly afterwards. He was succeeded by Sigeric and then Wallia, who succeeded in having the Visigoths accepted by Honorius as foederati in southern Gaul, with their capital at Toulouse. Wallia subsequently inflicted severe defeats upon the Silingi Vandals and the Alans in Hispania. Periodically they marched on Arles, the seat of the praetorian prefect, but were always pushed back. In 437 the Visigoths signed a treaty with the Romans, which they kept. Under Theodoric I the Visigoths allied with the Romans and fought Attila to a stalemate in the Battle of the Catalaunian Fields, although Theodoric was killed in the battle. Under Euric, the Visigoths established an independent Visigothic Kingdom and succeeded in driving the Suebi out of Hispania proper and back into Galicia. Although they controlled Spain, they still formed a tiny minority among a much larger Hispano-Roman population, approximately 200,000 out of 6,000,000. In 507, the Visigoths were pushed out of most of Gaul by the Frankish king Clovis I at the Battle of Vouillé. They were able to retain Narbonensis and Provence after the timely arrival of an Ostrogoth detachment sent by Theodoric the Great. The defeat at Vouillé resulted in their penetrating further into Hispania and establishing a new capital at Toledo. 
Under Liuvigild in the latter part of the 6th century, the Visigoths succeeded in subduing the Suebi in Galicia and the Byzantines in the south-west, and thus achieved dominance over most of the Iberian peninsula. Liuvigild also abolished the law which prevented intermarriage between Hispano-Romans and Goths, and he remained an Arian Christian. The conversion of Reccared I to Roman Catholicism in the late 6th century prompted the assimilation of Goths with the Hispano-Romans. At the end of the 7th century, the Visigothic Kingdom began to suffer from internal troubles. The kingdom fell and was progressively conquered by the Umayyad Caliphate from 711 after the defeat of their last king Roderic at the Battle of Guadalete. Some Visigothic nobles found refuge in the mountain areas of the Pyrenees and Cantabria. The Christians began to regain control under the leadership of the nobleman Pelagius of Asturias, who founded the Kingdom of Asturias in 718 and defeated the Muslims at the Battle of Covadonga in ca. 722, in what is taken by historians to be the beginning of the Reconquista. It was from the Asturian kingdom that modern Spain and Portugal evolved. The Visigoths were never completely Romanized; rather, they were 'Hispanicized' as they spread widely over a large territory and population. They progressively adopted a new culture, retaining little of their original culture except for practical military customs, some artistic modalities, family traditions such as heroic songs and folklore, as well as certain conventions, including Germanic names still in use in present-day Spain. It is these artifacts of the original Visigothic culture that give ample evidence of its contributing foundation for the present regional culture. Portraying themselves as heirs of the Visigoths, the subsequent Christian Spanish monarchs declared their responsibility for the Reconquista of Muslim Spain, which was completed with the Fall of Granada in 1492. After the Hunnic invasion, many Goths became subjects of the Huns. A section of these Goths under the leadership of the Amali dynasty came to be known as the Ostrogoths. Others sought refuge in the Roman Empire, where many of them were recruited into the Roman army. In the spring of 399, Tribigild, a Gothic leader in charge of troops in Nakoleia, rose up in rebellion and defeated the first imperial army sent against him, possibly seeking to emulate Alaric's successes in the west. Gainas, a Goth who along with Stilicho and Eutropius had deposed Rufinus in 395, was sent to suppress Tribigild's rebellion, but instead plotted to use the situation to seize power in the Eastern Roman Empire. This attempt was, however, thwarted by the pro-Roman Goth Fravitta, and in the aftermath, thousands of Gothic civilians were massacred in Constantinople, many being burned alive in the local Arian church where they had taken shelter. As late as the 6th century Goths were settled as "foederati" in parts of Asia Minor. Their descendants, who formed the elite "Optimatoi" regiment, still lived there in the early 8th century. While they were largely assimilated, their Gothic origin was still well-known: the chronicler Theophanes the Confessor calls them Gothograeci. The Ostrogoths fought together with the Huns at the Battle of the Catalaunian Plains in 451. Following the death of Attila and the defeat of the Huns at the Battle of Nedao in 454, the Ostrogoths broke away from Hunnic rule under their king Valamir. 
Under his successor, Theodemir, they utterly defeated the Huns at Bassianae in 468, and then defeated a coalition of Roman-supported Germanic tribes at the Battle of Bolia in 469, which gained them supremacy in Pannonia. Theodemir was succeeded by his son Theodoric in 471, who was forced to compete with Theodoric Strabo, leader of the Thracian Goths, for the leadership of his people. Fearing the threat posed by Theodoric to Constantinople, the Eastern Roman emperor Zeno ordered Theodoric to invade Italy in 488. By 493, Theodoric had conquered all of Italy from the Scirian Odoacer, whom he killed with his own hands; he subsequently formed the Ostrogothic Kingdom. Theodoric settled his entire people in Italy, estimated at 100,000–200,000, mostly in the northern part of the country, and ruled the country very efficiently. The Goths in Italy constituted a small minority of the population in the country. Intermarriage between Goths and Romans was forbidden, and Romans were also forbidden from carrying arms. Nevertheless, the Roman majority was treated fairly. The Goths were briefly reunited under one crown in the early 6th century under Theodoric, who became regent of the Visigothic kingdom following the death of Alaric II at the Battle of Vouillé in 507. Shortly after Theodoric's death, the country was invaded by the Eastern Roman Empire in the Gothic War, which severely devastated and depopulated the Italian peninsula. The Ostrogoths made a brief resurgence under their king Totila, who was, however, killed at the Battle of Taginae in 552. After the last stand of the Ostrogothic king Teia at the Battle of Mons Lactarius in 553, Ostrogothic resistance ended, and the remaining Goths in Italy were assimilated by the Lombards, another Germanic tribe, who invaded Italy and founded the Kingdom of the Lombards in 568. Gothic tribes who remained in the lands around the Black Sea, especially in Crimea, were known as the Crimean Goths. During the late 5th and early 6th century, the Crimean Goths had to fend off hordes of Huns who were migrating back eastward after losing control of their European empire. In the 5th century, Theodoric the Great tried to recruit Crimean Goths for his campaigns in Italy, but few showed interest in joining him. They affiliated with the Eastern Orthodox Church through the Metropolitanate of Gothia, and were then closely associated with the Byzantine Empire. During the Middle Ages, the Crimean Goths were in perpetual conflict with the Khazars. John of Gothia, the metropolitan bishop of Doros, capital of the Crimean Goths, briefly expelled the Khazars from Crimea in the late 8th century, and was subsequently canonized as an Eastern Orthodox saint. In the 10th century, the lands of the Crimean Goths were once again raided by the Khazars. As a response, the leaders of the Crimean Goths made an alliance with Sviatoslav I of Kiev, who subsequently waged war upon and utterly destroyed the Khazar Khaganate. In the late Middle Ages the Crimean Goths were part of the Principality of Theodoro, which was conquered by the Ottoman Empire in the late 15th century. As late as the 18th century a small number of people in Crimea may still have spoken Crimean Gothic. The eagle-shaped fibula, part of the Domagnano Treasure, was used to join clothes c. AD 500; the piece on display in the Germanisches Nationalmuseum in Nuremberg is well-known. 
In Spain an important collection of Visigothic metalwork was found in the treasure of Guarrazar, Guadamur, Province of Toledo, Castile-La Mancha, an archeological find composed of twenty-six votive crowns and gold crosses from the royal workshop in Toledo, with Byzantine influence. The treasure represents the high point of Visigothic goldsmithery, according to Guerra and Galligaro. The two most important votive crowns are those of Recceswinth and of Suintila, displayed in the National Archaeological Museum of Madrid; both are made of gold, encrusted with sapphires, pearls, and other precious stones. Suintila's crown was stolen in 1921 and never recovered. There are several other small crowns and many votive crosses in the treasure. These findings, along with others from some neighbouring sites and with the archaeological excavation of the Spanish Ministry of Public Works and the Royal Spanish Academy of History (April 1859), formed a larger group of votive crowns and crosses. The aquiliform (eagle-shaped) fibulae that have been discovered in necropolises such as Duraton, Madrona or Castiltierra (cities of Segovia) are an unmistakable example of the Visigothic presence in Spain. These fibulae were used individually or in pairs, as clasps or pins in gold, bronze and glass to join clothes, showing the work of the goldsmiths of Visigothic Hispania. The Visigothic belt buckles, a symbol of rank and status characteristic of Visigothic women's clothing, are also notable as works of goldsmithery. Some pieces contain exceptional Byzantine-style lapis lazuli inlays and are generally rectangular in shape, with copper alloy, garnets and glass. The Mausoleum of Theodoric (Italian: "Mausoleo di Teodorico") is an ancient monument just outside Ravenna, Italy. It was built in 520 AD by Theodoric the Great, an Ostrogoth, as his future tomb. The current structure of the mausoleum is divided into two decagonal orders, one above the other; both are made of Istria stone. Its roof is a single 230-tonne Istrian stone, 10 meters in diameter. A niche leads down to a room that was probably a chapel for funeral liturgies; a stair leads to the upper floor. Located in the centre of the floor is a circular porphyry stone grave, in which Theodoric was buried. His remains were removed during Byzantine rule, when the mausoleum was turned into a Christian oratory. In the late 19th century, silting from a nearby rivulet that had partly submerged the mausoleum was drained and excavated. The Palace of Theodoric, also in Ravenna, has a symmetrical composition with arches and monolithic marble columns reused from previous Roman buildings, with capitals of different shapes and sizes. The Ostrogoths restored a number of Roman buildings, some of which have survived thanks to their work. During their governance of Hispania, the Visigoths built several surviving churches with basilical or cruciform floor plans, including the churches of San Pedro de la Nave in El Campillo, Santa María de Melque in San Martín de Montalbán, Santa Lucía del Trampal in Alcuéscar, Santa Comba in Bande, and Santa María de Lara in Quintanilla de las Viñas; the Visigothic crypt (the Crypt of San Antolín) in the Palencia Cathedral is a Visigothic chapel from the mid 7th century, built during the reign of Wamba to preserve the remains of the martyr Saint Antoninus of Pamiers, a Visigothic-Gallic nobleman brought from Narbonne to Visigothic Hispania in 672 or 673 by Wamba himself. These are the only remains of the Visigothic cathedral of Palencia. 
Reccopolis (Spanish: "Recópolis"), located near the tiny modern village of Zorita de los Canes in the province of Guadalajara, Castile-La Mancha, Spain, is an archaeological site of one of at least four cities founded in Hispania by the Visigoths. It is the only city in Western Europe to have been founded between the fifth and eighth centuries. According to Lauro Olmo Enciso, a professor of archaeology at the University of Alcalá, the city was ordered to be built by the Visigothic king Leovigild to honor his son Reccared I and to serve as Reccared's seat as co-king in the Visigothic province of Celtiberia, to the west of Carpetania, where the main capital, Toledo, lay. In ancient sources, the Goths are always described as tall and athletic, with light skin, blonde hair and blue eyes. Their physical size became a source of contempt among the Romans. Procopius notes that the Vandals and Gepids looked similar to the Goths, and on this basis, he suggested that they were all of common origin. Of the Goths, he wrote that "they all have white bodies and fair hair, and are tall and handsome to look upon." Before the invasion of the Huns, the Gothic Chernyakhov culture produced jewelry, vessels, and decorative objects in a style much influenced by Greek and Roman craftsmen. They developed a polychrome style of gold work, using wrought cells or settings to encrust gemstones into their gold objects. The Gothic language is the Germanic language with the earliest attestation (the 300s), making it a language of great interest in comparative linguistics. All other East Germanic languages are known, if at all, from proper names or short phrases that survived in historical accounts, and from loan-words in other languages. Gothic is known primarily from the Codex Argenteus, a translation of the Bible. The language was in decline by the mid-500s, due to the military victory of the Franks, the elimination of the Goths in Italy, and geographic isolation. In Spain the language lost its last and probably already declining function as a church language when the Visigoths converted to Catholicism in 589. The language survived as a domestic language in the Iberian peninsula (modern Spain and Portugal) as late as the 8th century, and Frankish author Walafrid Strabo wrote that it was still spoken in the lower Danube area in the early 9th century. The language survived in what is now northern Bulgaria until the 9th century, and a related dialect known as Crimean Gothic was spoken in the Crimea until the 16th century, according to references in the writings of travelers. Most modern scholars believe that Crimean Gothic did not derive from the dialect that was the basis for Bishop Ulfilas' translation of the Bible. Archaeological evidence in Visigothic cemeteries shows that social stratification was analogous to that of the village of Sabbas the Goth. The majority of villagers were common peasants. Paupers were buried with funeral rites, unlike slaves. In a village of 50 to 100 people, there were four or five elite couples. In Eastern Europe, houses include sunken-floored dwellings, surface dwellings, and stall-houses. The largest known settlement is in the Criuleni District. Chernyakhov cemeteries feature both cremation and inhumation burials; among the latter, the head is aligned to the north. Some graves were left empty. Grave goods often include pottery, bone combs, and iron tools, but hardly ever weapons. Peter Heather suggests that the freemen constituted the core of Gothic society. 
These were ranked below the nobility, but above the freedmen and slaves. It is estimated that around a fifth to a quarter of weapon-bearing Gothic males of the Ostrogothic Kingdom were freemen. Archaeology shows that the Visigoths, unlike the Ostrogoths, were predominantly farmers. They sowed wheat, barley, rye, and flax. They also raised pigs, poultry, and goats. Horses and donkeys were raised as working animals and fed with hay. Sheep were raised for their wool, which they fashioned into clothing. Archaeology indicates they were skilled potters and blacksmiths. When peace treaties were negotiated with the Romans, the Goths demanded free trade. Imports from Rome included wine and cooking-oil. Roman writers note that the Goths assessed taxes neither on their own people nor on their subjects. The early 5th century Christian writer Salvian compared the Goths' and related peoples' favourable treatment of the poor to the miserable state of peasants in Roman Gaul: For in the Gothic country the barbarians are so far from tolerating this sort of oppression that not even Romans who live among them have to bear it. Hence all the Romans in that region have but one desire, that they may never have to return to the Roman jurisdiction. It is the unanimous prayer of the Roman people in that district that they may be permitted to continue to lead their present life among the barbarians. Initially practising Gothic paganism, the Goths were gradually converted to Arianism in the course of the 4th century. According to Basil of Caesarea, a prisoner named Eutychus taken captive in a raid on Cappadocia in 260 preached the gospel to the Goths and was martyred. It was only in the 4th century, as a result of missionary activity by the Gothic bishop Ulfilas, whose grandparents were Cappadocians taken captive in the raids of the 250s, that the Goths were gradually converted. Ulfilas devised a Gothic alphabet and translated the Bible into Gothic. During the 370s, Goths converting to Christianity were subject to persecution by the Thervingian king Athanaric, who was a pagan. The Visigothic Kingdom in Hispania converted to Catholicism in the late 6th century. The Ostrogoths (and their remnants, the Crimean Goths) were closely connected to the Patriarchate of Constantinople from the 5th century, and became fully incorporated under the Metropolitanate of Gothia from the 9th century. The Goths' relationship with Sweden became an important part of Swedish nationalism, and until the 19th century, before the Gothic origin had been thoroughly researched by archaeologists, Swedish scholars considered Swedes to be the direct descendants of the Goths. Today, scholars identify this as a cultural movement called Gothicismus, which included an enthusiasm for things Old Norse. In medieval and modern Spain, the Visigoths were believed to be the progenitors of the Spanish nobility (compare Gobineau for a similar French idea). By the early 7th century, the ethnic distinction between Visigoths and Hispano-Romans had all but disappeared, but recognition of a Gothic origin, e.g. on gravestones, still survived among the nobility. The 7th century Visigothic aristocracy saw itself as bearers of a particular Gothic consciousness and as guardians of old traditions such as Germanic namegiving; probably these traditions were on the whole restricted to the family sphere (Hispano-Roman nobles were doing service for Visigothic nobles already in the 5th century and the two branches of Spanish aristocracy had fully adopted similar customs two centuries later). 
Beginning in 1278, when Magnus III of Sweden ascended to the throne, a reference to Gothic origins was included in the title of the King of Sweden. In 1973, with the accession of Carl XVI Gustaf, the title was changed to simply "King of Sweden." The Spanish and Swedish claims of Gothic origins led to a clash at the Council of Basel in 1434. Before the assembled cardinals and delegations could engage in theological discussion, they had to decide how to sit during the proceedings. The delegations from the more prominent nations argued that they should sit closest to the Pope, and there were also disputes over who were to have the finest chairs and who were to have their chairs on mats. In some cases, they compromised so that some would have half a chair leg on the rim of a mat. In this conflict, Nicolaus Ragvaldi, bishop of the Diocese of Växjö, claimed that the Swedes were the descendants of the great Goths, and that the people of Västergötland ("Westrogothia" in Latin) were the Visigoths and the people of Östergötland ("Ostrogothia" in Latin) were the Ostrogoths. The Spanish delegation retorted that it was only the "lazy" and "unenterprising" Goths who had remained in Sweden, whereas the "heroic" Goths had left Sweden, invaded the Roman empire and settled in Spain. In Spain, a man acting with arrogance would be said to be "haciéndose los godos" ("acting like the Goths"). In Chile, Argentina and the Canary Islands, "godo" was an ethnic slur used against European Spaniards, who in the early colonial period often felt superior to the people born locally ("criollos"). A large amount of literature has been produced on the Goths, with Henry Bradley's "The Goths" (1888) being the standard English-language text for many decades. More recently, Peter Heather has established himself as the leading authority on the Goths in the English-speaking world. The leading authority on the Goths in the German-speaking world is Herwig Wolfram.
https://en.wikipedia.org/wiki?curid=12641
Glycolysis Glycolysis (from "glycose", an older term for glucose, + "-lysis", degradation) is the metabolic pathway that converts glucose, C6H12O6, into pyruvate, CH3COCOO− (the anion of pyruvic acid), and a hydrogen ion, H+. The free energy released in this process is used to form the high-energy molecules ATP (adenosine triphosphate) and NADH (reduced nicotinamide adenine dinucleotide). Glycolysis is a sequence of ten enzyme-catalyzed reactions. Most monosaccharides, such as fructose and galactose, can be converted to one of these intermediates. The intermediates may also be directly useful rather than just utilized as steps in the overall reaction. For example, the intermediate dihydroxyacetone phosphate (DHAP) is a source of the glycerol that combines with fatty acids to form fat. Glycolysis is an oxygen-independent metabolic pathway. The wide occurrence of glycolysis indicates that it is an ancient metabolic pathway. Indeed, the reactions that constitute glycolysis and its parallel pathway, the pentose phosphate pathway, can occur without enzymes, catalyzed by metal ions, under the oxygen-free conditions of the Archean oceans. In most organisms, glycolysis occurs in the cytosol. The most common type of glycolysis is the "Embden–Meyerhof–Parnas (EMP) pathway", which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the "Entner–Doudoroff pathway" and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway. The glycolysis pathway can be separated into two phases: the preparatory (or investment) phase, in which ATP is consumed, and the pay-off phase, in which ATP and NADH are produced. The overall reaction of glycolysis is: Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O. The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups: each exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute an H+ ion, and each liberates an oxygen atom when it binds to an ADP molecule. Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced. For simple fermentations, the metabolism of one molecule of glucose to two molecules of pyruvate has a net yield of two molecules of ATP. Most cells will then carry out further reactions to "repay" the used NAD+ and produce a final product of ethanol or lactic acid. Many bacteria use inorganic compounds as hydrogen acceptors to regenerate the NAD+. Cells performing aerobic respiration synthesize much more ATP, but not as part of glycolysis. These further aerobic reactions use pyruvate and NADH + H+ from glycolysis. Eukaryotic aerobic respiration produces approximately 34 additional molecules of ATP for each glucose molecule; however, most of these are produced by a mechanism vastly different from the substrate-level phosphorylation in glycolysis. The lower energy production per glucose of anaerobic respiration relative to aerobic respiration results in greater flux through the pathway under hypoxic (low-oxygen) conditions, unless alternative sources of anaerobically oxidizable substrates, such as fatty acids, are found. The pathway of glycolysis as it is known today took almost 100 years to fully discover. 
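Before turning to the history of its discovery, the charge bookkeeping described above can be checked by tallying the ionic species named in the text on each side of the overall reaction. This is a worked restatement of the figures already given, using the simplified charges assumed there (NAD+ counted as +1, NADH as neutral, and the magnesium-bound forms ADPMg− and ATPMg2−):

\[
\underbrace{0}_{\text{glucose}} + \underbrace{2(+1)}_{2\,\mathrm{NAD^{+}}} + \underbrace{2(-1)}_{2\,\mathrm{ADPMg^{-}}} + \underbrace{2(-2)}_{2\,\mathrm{HPO_4^{2-}}} = -4
\]
\[
\underbrace{2(-1)}_{2\,\text{pyruvate}^{-}} + \underbrace{0}_{2\,\mathrm{NADH}} + \underbrace{2(+1)}_{2\,\mathrm{H^{+}}} + \underbrace{2(-2)}_{2\,\mathrm{ATPMg^{2-}}} + \underbrace{0}_{2\,\mathrm{H_{2}O}} = -4
\]

Both sides sum to −4, matching the statement that the net charges of −4 on each side are balanced.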
The combined results of many smaller experiments were required in order to understand the pathway as a whole. The first steps in understanding glycolysis began in the nineteenth century with the wine industry. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. French scientist Louis Pasteur researched this issue during the 1850s, and the results of his experiments began the long road to elucidating the pathway of glycolysis. His experiments showed that fermentation occurs by the action of living microorganisms, and that yeast's glucose consumption decreased under aerobic conditions of fermentation, in comparison to anaerobic conditions (the Pasteur effect). Insight into the component steps of glycolysis was provided by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast (due to the action of enzymes in the extract). This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled lab setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate. The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly and then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP). Arthur Harden and William Young along with Nick Sheppard determined, in a second experiment, that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP and NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began by observing that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued with the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character. In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid. In one paper, Meyerhof and scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz found that the equilibrium constants for the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. 
They further ruled out diphosphoglyceraldehyde as a possible intermediate in glycolysis. With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of the pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were due to the very short lifetime and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways. The first five steps of glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P). The first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration low, promoting continuous transport of glucose into the cell through the plasma membrane transporters. In addition, it blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen. In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels. "Cofactors:" Mg2+ G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point. The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below). In the third step, phosphofructokinase 1 (PFK-1) phosphorylates F6P, at the expense of a second ATP, to form fructose 1,6-bisphosphate. The energy expenditure of another ATP in this step is justified in two ways: the glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by PFK-1 is coupled to the hydrolysis of ATP (an energetically favorable step) it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below). This is also the rate-limiting step. Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell. 
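As a compact summary of the preparatory steps discussed above, the first three reactions can be sketched in standard textbook notation (given here for orientation, not quoted from the article):

```latex
% Step 1: hexokinase (or glucokinase in the liver); consumes ATP
\[ \text{glucose} + \text{ATP} \longrightarrow \text{glucose 6-phosphate (G6P)} + \text{ADP} \]
% Step 2: phosphoglucose isomerase; freely reversible
\[ \text{G6P} \rightleftharpoons \text{fructose 6-phosphate (F6P)} \]
% Step 3: phosphofructokinase-1 (PFK-1); consumes ATP, essentially irreversible
\[ \text{F6P} + \text{ATP} \longrightarrow \text{fructose 1,6-bisphosphate} + \text{ADP} \]
```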
The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archaea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaeal species. "Cofactors:" Mg2+ Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose) and glyceraldehyde 3-phosphate (an aldose). There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring. Electrons delocalized in the carbon-carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself via resonance charge distribution and by the presence of a charged ion prosthetic group. Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP), which proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation. The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose. The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate. The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose. Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (HPO42−), which dissociates to contribute the extra H+ ion and gives a net charge of −3 on both sides. Here, arsenate (AsO43−), an anion akin to inorganic phosphate, may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1,3-bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis. This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway. ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 on both sides. 
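The two pay-off-phase reactions just described (each occurring twice per glucose) can be sketched in standard textbook form as follows:

```latex
% Step 6: glyceraldehyde-3-phosphate dehydrogenase; oxidation plus phosphorylation
\[ \text{glyceraldehyde 3-phosphate} + \text{NAD}^+ + \text{P}_{\text{i}} \longrightarrow \text{1,3-bisphosphoglycerate} + \text{NADH} + \text{H}^+ \]
% Step 7: phosphoglycerate kinase; substrate-level phosphorylation
\[ \text{1,3-bisphosphoglycerate} + \text{ADP} \longrightarrow \text{3-phosphoglycerate} + \text{ATP} \]
```

Run twice per glucose, these steps account for the 2 NADH and 4 ATP mentioned above; subtracting the 2 ATP invested in the preparatory phase leaves the net yield of 2 ATP.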
"Cofactors:" Mg2+ Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate. Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism. "Cofactors:" 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration. A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step. "Cofactors:" Mg2+ The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis "after" the first control point. In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis "after" the second control point. The change in free energy, Δ"G", for each step in the glycolysis pathway can be calculated using Δ"G" = Δ"G"°' + "RT"ln "Q", where "Q" is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable. Using the measured concentrations of each step, and the standard free energy changes, the actual free energy change can be calculated. (Neglecting this is very common - the delta G of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks). From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps — the ones with large negative free energy changes — are not in equilibrium and are referred to as "irreversible"; such steps are often subject to regulation. Step 5 in the figure is shown behind the other steps, because that step is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that Δ"G" is not zero indicates that the actual concentrations in the erythrocyte are not accurately known. 
The regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and during exercise, thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver. In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls, the pancreatic beta cells cease insulin production, but instead stimulate the neighboring pancreatic alpha cells to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver, glucagon and epinephrine cause the phosphorylation of the key, rate-limiting enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis. In addition, hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, while glucokinase responds to the blood sugar level, imparting entirely intracellular controls of the glycolytic pathway in different tissues (see below). When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it can be converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. 
The rate-limiting enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, during fasting, exercise or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, and then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen store has been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis from occurring at the same time as gluconeogenesis and glycogenolysis. All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles). Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and will only phosphorylate the glucose entering the cell to form glucose-6-phosphate (G6P) when the sugar in the blood is abundant. This being the first step in the glycolytic pathway in the liver, it therefore imparts an additional layer of control of the glycolytic pathway in this organ. Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP). Fructose 2,6-bisphosphate (F2,6BP) is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver fructose-2,6-bisphosphate is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood. ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP (illustrated in the sketch after this paragraph). Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell. 
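The disproportionate rise in AMP noted above is usually explained by the adenylate kinase equilibrium (2 ADP ⇌ ATP + AMP): because ATP dominates the adenine nucleotide pool, a small fractional fall in ATP forces a much larger fractional rise in AMP. The sketch below assumes an equilibrium constant of 1 and a fixed total nucleotide pool; both numbers are illustrative assumptions, and the exact amplification factor depends on them.

```python
from math import sqrt

K = 1.0      # assumed adenylate kinase equilibrium constant, [ATP][AMP]/[ADP]^2
POOL = 5.0   # assumed total adenine nucleotide pool, mM (ATP + ADP + AMP)

def amp_at_equilibrium(atp):
    """Given [ATP], solve K*[ADP]^2 = [ATP]*[AMP] with [ADP] + [AMP] = POOL - [ATP]."""
    rest = POOL - atp
    # Quadratic in ADP: K*adp^2 + atp*adp - atp*rest = 0
    adp = (-atp + sqrt(atp * atp + 4.0 * K * atp * rest)) / (2.0 * K)
    return rest - adp  # the remainder of the pool is AMP

amp_before = amp_at_equilibrium(4.8)        # assumed resting ATP level, mM
amp_after = amp_at_equilibrium(4.8 * 0.9)   # after a 10% drop in ATP
print(amp_after / amp_before)               # a severalfold rise in AMP (about 10x with these assumptions)
```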
Citrate inhibits phosphofructokinase when tested "in vitro" by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect "in vivo", because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis. TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase), which cleaves the phosphate at carbon-2, producing F6P. It can also behave as a kinase (PFK2), adding a phosphate onto carbon-2 of F6P, which produces F2,6BP. In humans, the TIGAR protein is encoded by the "C12orf5" gene. The TIGAR enzyme will hinder the forward progression of glycolysis by creating a build-up of fructose-6-phosphate (F6P), which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P will shunt carbons into the pentose phosphate pathway. The enzyme pyruvate kinase catalyzes the last step of glycolysis, in which pyruvate and ATP are formed: it transfers a phosphate group from phosphoenolpyruvate (PEP) to ADP, yielding one molecule of pyruvate and one molecule of ATP. Liver pyruvate kinase is indirectly regulated by epinephrine and glucagon, through protein kinase A. This protein kinase phosphorylates liver pyruvate kinase to deactivate it. Muscle pyruvate kinase is not inhibited by epinephrine activation of protein kinase A. Glucagon signals fasting (no glucose available). Thus, glycolysis is inhibited in the liver but unaffected in muscle when fasting. An increase in blood sugar leads to secretion of insulin, which activates phosphoprotein phosphatase I, leading to dephosphorylation and activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. The overall process of glycolysis consumes NAD+ (see the overall reaction above): if glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available. One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation (sketched after this paragraph). This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time. Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol. Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source. 
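For reference, the NAD+-regenerating fermentations described above are commonly written as follows (standard formulations, given here as a sketch):

```latex
% Lactic acid fermentation (lactate dehydrogenase)
\[ \text{pyruvate} + \text{NADH} + \text{H}^+ \longrightarrow \text{lactate} + \text{NAD}^+ \]
% Ethanol fermentation (pyruvate decarboxylase, then alcohol dehydrogenase)
\[ \text{pyruvate} \longrightarrow \text{acetaldehyde} + \text{CO}_2 \]
\[ \text{acetaldehyde} + \text{NADH} + \text{H}^+ \longrightarrow \text{ethanol} + \text{NAD}^+ \]
```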
Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions, NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis. The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift from glucose oxidation to carbon dioxide and water towards glucose fermentation, which occurs when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells. The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle. Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis"; however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen. In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds. In aerobic organisms, a complex mechanism has been developed to use the oxygen in air as the final electron acceptor. The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). 
The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA), the substrate of the rate-limiting step in the synthesis of cholesterol. Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, isocitrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into another. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl-CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl-CoA that combines with oxaloacetate in the cycle. To cataplerotically remove oxaloacetate from the citric acid cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid cycle intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins. This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis. Many other metabolic pathways are strongly reliant on glycolysis as a source of metabolites. Although gluconeogenesis and glycolysis share many intermediates, the one is not functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. Indeed, if both sets of reactions were highly active at the same time, the net result would be the hydrolysis of four high-energy phosphate bonds (two ATP and two GTP) per reaction cycle. NAD+ is the oxidizing agent in glycolysis, as it is in most other energy-yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). 
The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see "Anoxic regeneration of NAD+" above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2 and NADPH are formed. NADPH is also formed by the pentose phosphate pathway, which converts glucose into ribose, which can be used in the synthesis of nucleotides and nucleic acids or can be catabolized to pyruvate. Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, the low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when glucose is overproduced by the liver without having a means of being broken down by the body, hyperglycemia results. Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of such mutations result in an inability of the cell to respire and therefore cause the death of the cell at an early stage. However, some mutations are seen, one notable example being pyruvate kinase deficiency, which leads to chronic hemolytic anemia. Malignant tumor cells perform glycolysis at a rate that is ten times faster than their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes, resulting in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolysis enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than because of the uncontrolled growth of cells. A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism. This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET). 
There is ongoing research into affecting mitochondrial metabolism and treating cancer by reducing glycolysis, and thus starving cancerous cells, in various new ways, including a ketogenic diet. Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle. The intermediates of glycolysis, when depicted in Fischer projections, show the chemical changes step by step. Such images can be compared to polygonal model representations.
https://en.wikipedia.org/wiki?curid=12644
Guernica Guernica, official name (reflecting the Basque language) Gernika, is a town in the province of Biscay, in the Autonomous Community of the Basque Country, Spain. The town of Guernica is one part (along with neighbouring Lumo) of the municipality of Gernika-Lumo, whose population is 16,224. Gernika is best known, among those residing outside the Basque region, both as the scene of the April 26, 1937 bombing of Guernica, one of the first aerial bombings (by Nazi Germany's Luftwaffe), and as the inspiration for Pablo Picasso's painting "Guernica", depicting the effects of that outrage. The town is situated at an altitude of 10 m, in the region of Busturialdea, in the valley of the Oka river, whose mouth forms the heart of the Urdaibai estuary. Gernika borders on several neighbouring townships. The town of Guernica was founded by Count Tello on April 28, 1366, at the intersection of the road from Bermeo to Durango with the road from Bilbao to Elantxobe and Lekeitio. The strategic importance of the site was increased by the fact that it lay on a major river estuary, where vessels could dock at the port of Suso. In time, it took on the typical shape of a Basque town, comprising a series of parallel streets (Goienkale, Azokekale, Artekale and Barrenkale; respectively: 'upper, market, between, lower roads') and a transverse street called Santa María, with a church at each end of the built-up area. Life in the town became rigidly structured, with the aim being to preserve the privileges of the dominant middle classes. This pattern continued practically unaltered until the late 17th century. On a small hillock in the town stand the Meeting House and the famous Tree of Gernika. By ancient tradition, Basques, and indeed other peoples in medieval Europe, held assemblies under a tree, usually an oak, to discuss matters affecting the community. In Biscay, each administrative district (known as a merindad) had its appointed tree, but over the centuries, the Tree of Guernica acquired particular importance. It stood in the parish of Lumo, on a site known as Gernikazarra, beside a small shrine. The laws of Biscay continued to be drawn up under this tree until 1876, with each town and village in the province sending two representatives to the sessions, known as General Assemblies. This early form of democracy was recorded by the philosopher Rousseau, by the poet William Wordsworth, by the dramatist Tirso de Molina and by the composer Iparragirre, who wrote the piece called Gernikako Arbola. When the Domain of Biscay was incorporated into the kingdom of Castile, the king of Castile visited Guernica and swore an oath under the Tree promising to uphold the fueros or local laws of Biscay. The oath of King Ferdinand, known as the "Catholic Monarch", on June 30, 1476, is depicted in a painting by Francisco de Mendieta popularly known as El besamanos ("The Royal audience"). On July 3, 1875, during the Carlist Wars, the pretender to the throne Don Carlos also visited Guernica and swore the oath. Throughout the 19th century, there were frequent meetings under the Tree, including both General Assemblies and other political events. By the 18th century, there was a square at the centre of the town, flanked by the town hall, a public gaol housing prisoners from all over the Lordship of Biscay, a hospital and a poor-house for local people. 
Day-to-day life comprised agriculture (growing of cereals, vegetables and fruit), crafts (menders, tailors, cobblers, flax manufacturers) and trade (transportation and sale of goods and produce). This was also a time of continual conflicts with the neighbouring parish of Lumo over disputed land. These disputes were not finally settled until 1882, when the two parishes joined together to form Gernika-Lumo. The first industrial concerns were set up in the early years of the 20th century. This encouraged population growth, and the town grew from 4,500 inhabitants in 1920 to 6,000 in 1936. On April 26, 1937, during the Spanish Civil War, Guernica was the scene of the Bombing of Guernica by the Condor Legion of Nazi Germany's "Luftwaffe" and the Italian Aviazione Legionaria. According to official Basque figures, 1,654 civilians were killed, but the German "Bundeswehr Magazine" (published in April 2007, page 94) reports a round figure of 300 civilians killed in the bombing. The raid was requested by Francisco Franco to aid his effort to overthrow the Basque Government and the Spanish Republican government. The town was devastated, though the Biscayan assembly and the Oak of Guernica survived. The Bombing of Guernica, which went on continuously for three hours, is considered to be the beginning of the Luftwaffe doctrine of terror bombing, in which civilian targets were selected to demoralize the enemy. Pablo Picasso painted his famous "Guernica" to commemorate the horrors of the bombing, and René Iché made a violent sculpture the day after the bombing. The bombing has inspired musical compositions by Octavio Vazquez ("Gernika" Piano Trio) and René-Louis Baron, and poems by Paul Eluard ("Victory of Guernica") and Uys Krige ("Nag van die Fascistiese Bomwerpers"; in English, from the Afrikaans, "Night of the Fascist Bombers"). There is also a short film from 1950, by Alain Resnais, titled "Guernica". Celebrations were staged in 1966 to mark the 600th anniversary of the founding of the town. As part of these celebrations, a statue of Count Tello, made by local sculptor Agustín Herranz, was set up in the Fueros Square. At present, Gernika-Lumo has 16,244 (2009) inhabitants. It is a town with a prosperous service sector, and is also home to industrial companies, as well as good cultural and educational amenities. Guernica is historically the seat of the parliament of the province of Biscay, whose executive branch is located in nearby Bilbao. In prior centuries, Lumo had been the meeting place of the traditional Biscayan assembly, Urduña and chartered towns like Guernica were under the direct authority of the Lord of Biscay, and Enkarterri and the Durango area had separate assemblies. All would hold assemblies under local big trees. As time passed, the role of separate assemblies was superseded by the single assembly in Guernica, and by 1512, its oak, known as the Gernikako Arbola, became symbolic of the traditional rights of the Basque people as a whole. The trees are always renewed from their own acorns. One of these trees (the "Old Tree") lived until the 19th century, and may be seen, as a dry stump, near the assembly house. A tree planted in 1860 to replace it died in 2004 and was in turn replaced; the sapling that had been chosen to become the official Oak of Guernica is also sick, so the tree will not be replaced until the earth around the site has been restored to health. 
A hermitage was built beside the Gernikako Arbola to double as an assembly place, followed by the current house of assembly ("Biltzar Jauregia" in Basque), built in 1826. On April 26, 1937, during the Spanish Civil War, the town was razed to the ground by German aircraft belonging to the Condor Legion, sent by Hitler to support Franco's troops. For almost four hours bombs rained down on Guernica in an "experiment" for the blitzkrieg tactics and bombing of civilians seen in later wars. In 1987 the 50th anniversary of the bombing was commemorated as the town hosted the Preliminary Congress of the World Association of Martyr Cities. The full congress was held subsequently in Madrid, bringing together representatives of cities all over the world. Since then, Gernika-Lumo has been a member of this association. 1988 saw the setting up of the monument "Gure Aitaren Etxea", by Basque sculptor Eduardo Chillida, and in 1990 "Large Figure in a Shelter", by British sculptor Henry Moore, was erected beside it. These monuments are symbolic of Gernika-Lumo as a city of peace. As part of the "Symbol for Peace" movement, Gernika has twinned with several towns, including Berga (Catalonia – 1986), Pforzheim (Germany – 1988) and Boise, Idaho (United States – 1993). The twinning agreements include co-operation in the fields of culture, education and industry. There is a popular saying in Guernica which runs as follows: "lunes gerniqués, golperik ez". This translates roughly as "not a stroke of work gets done on Mondays". The Monday market day has for decades been considered as a holiday in the town. People would flock to Guernica not just from the immediate vicinity, but from all over the province, so that the town was packed. They came not just to buy or sell at the produce market, but also to eat at the town's renowned restaurants and afterwards perhaps to watch a pelota game at the local court. The Monday market has been fulfilling its age-old function of bringing people together since the times when people could not afford to travel far and it provided them with a chance to socialise. The bombing of Guernica by Nazi Germany's "Luftwaffe" and the Italian Aviazione Legionaria was deliberately chosen to occur on a Monday (April 26, 1937), because it was known that the Basque people who lived outside of Guernica proper would travel into town for the Market Day, thus affording the pilots of the German and Italian aircraft the opportunity to murder as many innocent people as possible. Jai alai (cesta-punta) is a form of pelota. The Guernica jai alai court is the biggest operational court of its type in the world. It was designed by Secundino Zuano, one of Spain's leading architects of the 20th century and first opened in 1963. It is acknowledged by players of the game to be the world's finest court. Bare-handed pelota games are held at the Santanape court. This is the most popular form of the sport.
https://en.wikipedia.org/wiki?curid=12646
Gerrit Rietveld Gerrit Thomas Rietveld (24 June 1888 – 25 June 1964) was a Dutch furniture designer and architect. One of the principal members of the Dutch artistic movement called De Stijl, Rietveld is famous for his Red and Blue Chair and for the Rietveld Schröder House, which is a UNESCO World Heritage Site. Gerrit Thomas Rietveld was born in Utrecht on 24 June 1888 as the son of a joiner. He left school at 11 to be apprenticed to his father and enrolled at night school before working as a draughtsman for C. J. Begeer, a jeweller in Utrecht, from 1906 to 1911. By the time he opened his own furniture workshop in 1917, Rietveld had taught himself drawing, painting and model-making. He afterwards set up in business as a cabinet-maker. Rietveld designed his Red and Blue Chair in 1917; it has become an iconic piece of modern furniture. Hoping that much of his furniture would eventually be mass-produced rather than handcrafted, Rietveld aimed for simplicity in construction. In 1918, he started his own furniture factory, and changed the chair's colours after becoming influenced by the "De Stijl" movement, of which he became a member in 1919, the same year in which he became an architect. The contacts that he made at "De Stijl" gave him the opportunity to exhibit abroad as well. In 1923, Walter Gropius invited Rietveld to exhibit at the Bauhaus. He built the Rietveld Schröder House in 1924, in close collaboration with the owner, Truus Schröder-Schräder. Built in Utrecht at Prins Hendriklaan 50, the house has a conventional ground floor but is radical on the top floor, lacking fixed walls and instead relying on sliding walls to create and change living spaces. The house has been a UNESCO World Heritage Site since 2000. His involvement in the Schröder House exerted a strong influence on Truus' daughter, Han Schröder, who became one of the first female architects in the Netherlands. Rietveld broke with "De Stijl" in 1928 and became associated with a more functionalist style of architecture, known as either Nieuwe Zakelijkheid or Nieuwe Bouwen. The same year he joined the Congrès Internationaux d'Architecture Moderne. From the late 1920s he was concerned with social housing, inexpensive production methods, new materials, prefabrication and standardisation. In 1927 he was already experimenting with prefabricated concrete slabs, a very unusual material at that time. In the 1920s and 1930s, however, all his commissions came from private individuals, and it was not until the 1950s that he was able to put his progressive ideas about social housing into practice, in projects in Utrecht and Reeuwijk. Rietveld designed the Zig-Zag Chair in 1934 and started the design of the Van Gogh Museum in Amsterdam, which was finished after his death. In 1951 Rietveld designed a retrospective exhibition about "De Stijl" which was held in Amsterdam, Venice and New York. Interest in his work revived as a result. In subsequent years he was given many commissions, including the Dutch pavilion for the Venice Biennale (1953), the art academies in Amsterdam and Arnhem, and the press room for the UNESCO building in Paris. Designed for the display of small sculptures at the Third International Sculpture Exhibition in Arnhem's Sonsbeek Park in 1955, Rietveld's "Sonsbeek Pavilion" was rebuilt at the Kröller-Müller Museum in 1965. Due to irreparable damage caused by ongoing decay, it was once again rebuilt, this time with new materials, in 2010. 
In order to handle all these projects, in 1961 Rietveld set up a partnership with the architects Johan van Dillen and J. van Tricht; the practice built hundreds of homes, many of them in the city of Utrecht. His work was neglected when rationalism came into vogue, but he benefited from a revival of the style of the 1920s thirty years later. Rietveld died on 25 June 1964 in Utrecht. His son Wim Rietveld also became a renowned industrial designer. Rietveld had his first retrospective exhibition devoted to his architectural work at the Central Museum, Utrecht, in 1958. When the art academy in Amsterdam became part of the higher professional education system in 1968 and was given the status of an Academy for Fine Arts and Design, the name was changed to the Gerrit Rietveld Academy in honour of Rietveld. "Gerrit Rietveld: A Centenary Exhibition" at the Barry Friedman Gallery, New York, in 1988 was the first comprehensive presentation of the Dutch architect's original works ever held in the U.S. The highlight of a celebratory "Rietveld Year" in Utrecht, the exhibition "Rietveld's Universe" opened at the Centraal Museum and compared him and his work with famous contemporaries such as Wright, Le Corbusier and Mies van der Rohe. Two software tools, both for code review, have been named after Gerrit Rietveld: Gerrit and Rietveld.
https://en.wikipedia.org/wiki?curid=12648
Gary, Indiana Gary is a city in Lake County, Indiana, United States, southeast of downtown Chicago, Illinois. Gary is adjacent to the Indiana Dunes National Park and borders southern Lake Michigan. Gary was named after lawyer Elbert Henry Gary, who was the founding chairman of the United States Steel Corporation. The city is known for its large steel mills and as the birthplace of the Jackson 5 music group. The population of Gary was 80,294 at the 2010 census, making it the ninth-largest city in the state of Indiana. Once a prosperous steel town, it has suffered drastic population loss due to overseas competition and restructuring of the industry, falling by 55 percent from its peak of 178,320 in 1960. As with many Rust Belt cities, it suffers from unemployment, decaying infrastructure, and low literacy and educational attainment levels. It is estimated that nearly one-third of all houses in the city are unoccupied or abandoned. Gary, Indiana, was founded in 1906 by the United States Steel Corporation as the home for its new plant, Gary Works. The city was named after lawyer Elbert Henry Gary, who was the founding chairman of the United States Steel Corporation. Gary was the site of civil unrest in the steel strike of 1919. On October 4, 1919, a riot broke out on Broadway, the main north–south street through downtown Gary, between striking steel workers and strike breakers brought in from outside. Three days later, Indiana governor James P. Goodrich declared martial law. Shortly thereafter, over 4,000 federal troops under the command of Major General Leonard Wood arrived to restore order. The jobs offered by the steel industry provided Gary with very rapid growth and a diverse population within the first 26 years of its founding. According to the 1920 United States Census, 29.7% of Gary's population at the time was classified as foreign-born, mostly from eastern European countries, with another 30.8% classified as native-born with at least one foreign-born parent. By the 1930 United States Census, the first census in which Gary's population exceeded 100,000, the city was the fifth largest in Indiana and comparable in size to South Bend, Fort Wayne, and Evansville. At that time, 78.7% of the population was classified as white, with 19.3% of the population classified as foreign-born and another 25.9% as native-born with at least one foreign-born parent. In addition to white internal migrants, Gary had attracted numerous African-American migrants from the South in the Great Migration, and 17.8% of the population was classified as black. 3.5% was classified as Mexican (now likely to be identified as Hispanic, as some were likely American citizens in addition to immigrants). Gary's fortunes have risen and fallen with those of the steel industry. The growth of the steel industry brought prosperity to the community. Broadway was known as a commercial center for the region. Department stores and architecturally significant movie houses were built in the downtown area and the Glen Park neighborhood. In the 1960s, like many other American urban centers reliant on one particular industry, Gary entered a spiral of decline. Gary's decline was brought on by the growing overseas competitiveness in the steel industry, which had caused U.S. Steel to lay off many workers from the Gary area. The U.S. Steel Gary Works employed over 30,000 in 1970, declined to just 6,000 by 1990, and further declined to 5,100 in August 2015. 
Attempts to shore up the city's economy with major construction projects, such as a Holiday Inn hotel and the Genesis Convention Center, failed to reverse the decline. Rapid racial change occurred in Gary during the late 20th century. These population changes resulted in political change which reflected the racial demographics of Gary: the non-white share of the city's population increased from 21% in 1930 and 39% in 1960 to 53% in 1970. Non-whites were primarily restricted to living in the Midtown section just south of downtown (per the 1950 Census, 97% of the black population of Gary was living in this neighborhood). Gary had one of the nation's first African-American mayors, Richard G. Hatcher, and hosted the ground-breaking 1972 National Black Political Convention. In the late 1990s and early 2000s, Gary had the highest percentage of African-Americans of U.S. cities with a population of 100,000 or more, 84% (as of the 2000 U.S. census). This no longer applies to Gary since the population of the city has now fallen well below 100,000 residents. As of 2013, the Gary Department of Redevelopment has estimated that one-third of all homes in the city are unoccupied and/or abandoned. U.S. Steel continues to be a major steel producer, but with only a fraction of its former level of employment. While Gary has failed to reestablish a manufacturing base since its population peak, two casinos opened along the Gary lakeshore in the 1990s, although their situation has been aggravated by the state's closing of Cline Avenue, an important access route to the area. Today, Gary faces the difficulties of a Rust Belt city, including unemployment, decaying infrastructure, and low literacy and educational attainment levels. Gary has closed several of its schools within the last ten years. While some of the school buildings have been reused, most remain unused since their closing. As of 2014, Gary is considering closing additional schools in response to budget deficits. Gary chief of police Thomas Houston was convicted of excessive force and abuse of authority in 2008; he died in 2010 while serving a three-year, five-month federal prison sentence. In April 2011, 75-year-old mayor Rudy Clay announced that he would suspend his campaign for reelection as he was being treated for prostate cancer. He endorsed rival Karen Freeman-Wilson, who won the Democratic mayoral primary in May 2011. Freeman-Wilson won election with 87 percent of the vote and her term began in January 2012; she is the first woman elected mayor in the city's history. She was reelected in 2015. She was defeated in her bid for a third term in the 2019 Democratic primary by Lake County Assessor Jerome Prince. Since no challengers filed for the November 2019 general election, Prince's nomination was effectively tantamount to election; he was sworn in as the city's 21st mayor on December 30, 2019, and officially succeeded Freeman-Wilson on January 1, 2020. A number of single properties and national historic districts in Gary are listed on the National Register of Historic Places. Downtown Gary is separated by Broadway into two distinctive communities. Originally, the City of Gary consisted of The East Side, The West Side, The South Side (the area south of the train tracks near 9th Avenue), and Glen Park, located further south along Broadway. The East Side was demarcated by streets named after the states in order of their acceptance into the Union. 
This area contained mostly wood-frame houses, some of the earliest in the city, and became known in the 20th century for its ethnic populations from Europe and large families. The single-family houses had repeating house designs that alternated from one street to another, with some streets looking very similar. Among the East Side's most notable buildings were Memorial Auditorium (a large red-brick and stone civic auditorium and the site of numerous events, concerts and graduations), the Palace Theater, Emerson School, St. Luke's Church, H.C. Gordon & Sons, and Goldblatt's Department stores, in addition to the Fair Department Store. All fronted Broadway as the main street that divided Gary. The West Side of Gary, west of Broadway, the principal commercial street, had streets named after the presidents of the United States in order of their election. Lytton's, Hudson's ladies store, J.C. Penney, and Radigan Bros Furniture Store developed on the west side of Broadway. Developed later, this side of town was known for its masonry or brick residences and its taller and larger commercial buildings, including the Gary National Bank Building, Hotel Gary (now Genesis Towers), the Knights of Columbus Hotel & Building (now a seniors building fronting 5th Avenue), the Tivoli Theater (demolished), the U.S. Post Office, Main Library, Mercy and Methodist Hospitals and Holy Angels Cathedral and School. The West Side also had a secondary principal street, Fifth Avenue, which was lined with many commercial businesses, restaurants, theaters, tall buildings, and elegant apartment buildings. The West Side was viewed as having wealthier residents. The houses dated from about 1908 to the 1930s. Much of the West Side's housing was for executives of U.S. Steel and other prominent businessmen. Notable mansions were 413 Tyler Street and 636 Lincoln Street. Many of the houses were on larger lots. By contrast, a working-class area was made up of poured-concrete row houses arranged together and known as "Mill Houses"; they were built to house steel mill workers. The areas known as Emerson and Downtown West combine to form Downtown Gary. It was developed in the 1920s and houses several pieces of impressive architecture, including the Moe House, designed by Frank Lloyd Wright, and another Wright design, the Wynant House (1917), which was destroyed by fire. A significant number of older structures have been demolished in recent years because of the cost of restoration. Restructuring of the steel and other heavy industry in the late 20th century resulted in a loss of jobs, adversely affecting the city. Abandoned buildings in the downtown area include historic structures such as Union Station, the Palace Theater, and City Methodist Church. A large area of the downtown neighborhood (including City Methodist) was devastated by a major fire on October 12, 1997. Interstate 90 was constructed between downtown Gary and the United States Steel plant. Ambridge Mann is a neighborhood located on Gary's near west side along 5th Avenue. Ambridge was developed for workers at the nearby steel plant in the 1910s and 1920s. It is named after the American Bridge Works, which was a subsidiary of U.S. Steel. The neighborhood is home to a large stock of prairie-style and art deco homes. The Gary Masonic Temple is located in the neighborhood, along with the Ambassador apartment building. Located just south of Interstate 90, the neighborhood can be seen while passing Buchanan Street. Brunswick is located on Gary's far west side. 
The neighborhood is located just south of Interstate 90 and can also be seen from the expressway. The Brunswick area includes the Tri-City Plaza shopping center on West 5th Avenue (U.S. 20). The area is south of the Gary Chicago International Airport. Downtown West is located in north-central Gary on the west side of Broadway just south of Interstate 90. The Genesis Convention Center, the Gary Police Department, the Lake Superior Court House, and the Main Branch of the Gary Public Library are located along 5th Avenue. A new 123-unit mixed-income apartment development was built using a HUD Hope VI grant in 2006. The Adam Benjamin Metro Center is located just north of 4th Avenue. It is operated by the Gary Public Transportation Corporation and serves as a multi-modal hub. It serves both as the Downtown Gary South Shore train station and as an intercity bus stop. Tolleston is one of Gary's oldest neighborhoods, predating much of the rest of the city. It was platted by George Tolle in 1857, when the railroads were constructed to this area. This area is west of Midtown and south of Ambridge Mann. Tarrytown is a subdivision located in Tolleston between Whitcomb Street and Clark Road. Black Oak is located on the far southwest side of Gary, in the vicinity of the Burr Street exit to the Borman Expressway. It was annexed in the 1970s. Prior to that, Black Oak was an unincorporated area informally associated with Hammond, and the area has Hammond telephone numbers. After three referendums, the community voters approved annexation, having been persuaded by Mayor Hatcher that they would benefit more from services provided by the city than from those provided by the county. In the 20th century, it was the only majority-white neighborhood in Gary. Glen Park is located on Gary's far south side and is made up mostly of mid-twentieth-century houses. Glen Park is divided from the remainder of the city by the Borman Expressway. The northern portion of Glen Park is home to Gary's Gleason Park Golf Course and the campus of Indiana University Northwest. The far western portion of Glen Park is home to the Village Shopping Center. Glen Park includes the 37th Avenue corridor at Broadway. Midtown is located south of Downtown Gary, along Broadway. In the pre-1960s days of "de facto" segregation, this developed historically as a "black" neighborhood as African Americans came to Gary from the rural South in the Great Migration to seek jobs in the industrial economy. Aetna is located on Gary's far east side along the Dunes Highway. Aetna predates the city of Gary. This company town was founded in 1881 by the Aetna Powder Works, an explosives company. Its factory closed after the end of World War I. The Town of Aetna was annexed by Gary in 1928, around the same time that the city annexed the Town of Miller. In the late 1920s and early 1930s, Gary's prosperous industries helped generate residential and other development in Aetna, resulting in an impressive collection of art deco architecture. The rest of the community was built after World War II and the Korean War in the 1950s, in a series of phases. On its south and east, Aetna borders the undeveloped floodplain of the Little Calumet River. Emerson is located in north-central Gary on the east side of Broadway. Gary City Hall is located in Emerson, just south of Interstate 90, along with the Indiana Department of Social Services building and the Calumet Township Trustee's office. A 6,000-seat minor league baseball stadium for the Gary SouthShore RailCats, U.S. 
Steel Yard, was constructed in 2002, along with contiguous commercial space and minor residential development. Miller Beach, also known simply as Miller, is on Gary's far northeast side. Settled in the 1850s and incorporated as an independent town in 1907, Miller was annexed by the city of Gary in 1918. Miller developed around the old stagecoach stop and train station known by the 1850s as Miller's Junction and/or Miller's Station. Miller Beach is racially and economically diverse. It attracts investor interest due to the many year-round and summer homes within walking distance of Marquette Park and Lake Michigan. Prices for lakefront property are affordable compared to those in Illinois suburban communities. Lake Street provides shopping and dining options for Miller Beach visitors and residents. East Edge, a development of 28 upscale condominium, townhome, and single-family homes, began construction in 2007 at the eastern edge of Miller Beach along County Line Road, one block south of Lake Michigan. The city is located at the southern end of the former lake bed of the prehistoric Lake Chicago and the current Lake Michigan. Most of the city's soil, to nearly one foot below the surface, is pure sand. The sand beneath Gary, and on its beaches, is of such volume and quality that for over a century companies have mined it, especially for the manufacture of glass. According to the 2010 census, Gary has a total area of , of which (or 87.22%) is land and (or 12.78%) is water. Gary is "T" shaped, with its northern border on Lake Michigan. At the northwesternmost section, Gary borders Hammond and East Chicago. Miller Beach, its easternmost neighborhood, borders Lake Station and Portage. Gary's southernmost section borders Griffith, Hobart, Merrillville, and unincorporated Ross. Gary is about from the Chicago Loop. Gary is listed by the Köppen-Geiger climate classification system as humid continental (Dfa). In July and August, the warmest months, high temperatures average 84 °F (29 °C) and peak just above 100 °F (38 °C), and low temperatures average 63 °F (17 °C). In January and February, the coldest months, high temperatures average around 29 °F (−2 °C) and low temperatures average 13 °F (−11 °C), with at least a few days of temperatures dipping below 0 °F (−18 °C). The weather of Gary is greatly regulated by its proximity to Lake Michigan. Weather varies yearly. In summer months Gary is humid. The city's yearly precipitation averages about 40 inches. Summer is the rainiest season. Winters vary but are predominantly snowy. Snowfall in Gary averages approximately 25 inches per year. Sometimes large blizzards hit because of "lake effect snow", a phenomenon whereby large amounts of water evaporated from the lake deposit onto the shoreline areas as inordinate amounts of snow. The change in the economy and resulting loss of jobs has caused a drop in population by more than half since its peak in 1960. As of the census of 2010, there were 80,294 people, 31,380 households, and 19,691 families residing in the city. The population density was . There were 39,531 housing units at an average density of . The racial makeup of the city was 84.8% African American, 10.7% White, 0.3% Native American, 0.2% Asian, 1.8% from other races, and 2.1% from two or more races. Hispanic or Latino people of any race were 5.1% of the population. Non-Hispanic Whites were 8.9% of the population in 2010, down from 39.1% in 1970. 
There were 31,380 households, of which 33.5% had children under the age of 18 living with them, 25.2% were married couples living together, 30.9% had a female householder with no husband present, 6.7% had a male householder with no wife present, and 37.2% were non-families. 32.8% of all households were made up of individuals, and 11.9% had someone living alone who was 65 years of age or older. The average household size was 2.54 and the average family size was 3.23. The median age in the city was 36.7 years. 28.1% of residents were under the age of 18; 8.6% were between the ages of 18 and 24; 21.8% were from 25 to 44; 27.1% were from 45 to 64; and 14.5% were 65 years of age or older. The gender makeup of the city was 46.0% male and 54.0% female. As of the census of 2000, there were 102,746 people, 38,244 households, and 25,623 families residing in the city. The population density was 2,045.5 people per square mile (789.8/km2). There were 43,630 housing units at an average density of 868.6 per square mile (335.4/km2). The racial makeup of the city was 84.03% African American, 11.92% White, 0.21% Native American, 0.14% Asian, 0.02% Pacific Islander, 1.97% from other races, and 1.71% from two or more races. Hispanic or Latino people of any race were 4.93% of the population. There were 38,244 households, out of which 31.2% had children under the age of 18 living with them, 30.2% were married couples living together, 30.9% had a female householder with no husband present, and 33.0% were non-families. 28.9% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.66 and the average family size was 3.28. In the city, the population was spread out, with 29.9% under the age of 18, 10.1% from 18 to 24, 25.1% from 25 to 44, 22.2% from 45 to 64, and 12.8% who were 65 years of age or older. The median age was 34 years. For every 100 females, there were 84.6 males. For every 100 females age 18 and over, there were 78.0 males. The median income for a household in the city was $27,195, and the median income for a family was $32,205. Males had a median income of $34,992 versus $24,432 for females. The per capita income for the city was $14,383. About 22.2% of families and 25.8% of the population were below the poverty line, including 37.9% of those under age 18 and 14.1% of those age 65 or over. Meredith Willson's 1957 Broadway musical "The Music Man" featured the song "Gary, Indiana", in which lead character (and con man) Professor Harold Hill wistfully recalls his purported hometown, then prosperous. Hill claims to be an alumnus of "Gary Conservatory of Music, Class of '05," but this is later revealed to be another of his lies. The City of Gary was not founded until 1906. Willson's musical, set in 1912, was adapted both as a film of the same name released in 1962, and as a television film, produced in 2003. The 1996 urban film "Original Gangstas" was filmed in the city. It starred Gary native Fred Williamson, Pam Grier, Jim Brown, Richard Roundtree, and Isabel Sanford, among others. Since the early 2000s, Gary has been the setting for numerous films made by Hollywood filmmakers. In 2009, scenes for the remake of "A Nightmare on Elm Street" were filmed in Gary. Scenes from "" wrapped up filming on August 16, 2010. The History Channel documentary "Life After People" was filmed in Gary, exploring areas that have deteriorated or been abandoned because of the loss of jobs and residents. 
The Gary Public Library System consists of the main library at 220 West 5th Avenue and several branches: Brunswick Branch, W. E. B. DuBois Branch, J. F. Kennedy Branch, Tolleston Branch, and Woodson Branch. In March 2011, the Gary Library Board voted to close the main library on 5th Avenue and the Tolleston branch in what officials said was their best economic option. The main library closed at the end of 2011. The building now houses a museum. Lake County Public Library operates the Black Oak Branch at 5921 West 25th Avenue in the Gary city limits. In addition, Indiana University Northwest operates the John W. Anderson Library on its campus. The following sports franchises are based in Gary: Three school districts serve the city, and multiple charter schools are located within the city. Most public schools in Gary are administered by the Gary Community School Corporation. The other public schools within the city are administered by Lake Ridge Schools Corporation, which is the school system for the Black Oak neighborhood and unincorporated Calumet Township. Due to annexation law, Black Oak residents retained their original school system and were not required to attend Gary public schools. Charter schools in Indiana, including those in Gary, are granted charters by one of a small number of chartering institutions. Indiana charter schools are generally managed in cooperation between the chartering institution, a local board of parents and community members, salaried school administrators, and a management company. Charter schools in Gary as of 2011 include Thea Bowman Leadership Academy, Charter School of the Dunes, Gary Lighthouse Charter School (formerly, Blessed Sacrament Parish and Grade School), and 21st Century Charter. Gary is home to two regional state college campuses: Gary is served by two major newspapers based outside the city, and by a Gary-based, largely African-American interest paper. These papers provide regional topics, and cover events in Gary. Gary is served by five local broadcasters plus government access and numerous Chicago area radio and stations, and by other nearby stations in Illinois and Indiana. Gary is served by the Gary Police Department and the Lake County Sheriff. The Gary Fire Department (GFD) provides fire protection and emergency medical services to the city of Gary. Gary is the hometown of The Jackson 5, a family of musicians who influenced the sound of modern popular music. In 1950, Joseph and Katherine Jackson moved from East Chicago, Indiana into their two-bedroom house at 2300 Jackson Street. They had married on November 5, 1949. Their entertainer children later recorded a song entitled "2300 Jackson Street" (1989). The Jackson children include:
https://en.wikipedia.org/wiki?curid=12650
Gregory the Illuminator Gregory the Illuminator (classical ; "Grigor Lusavorich") ( – ) is the patron saint and first official head of the Armenian Apostolic Church. He was a religious leader who is credited with converting Armenia from paganism to Christianity in 301. Armenia thus became the first nation to adopt Christianity as its official religion. Gregory was the son of the Armenian Parthian nobles Anak the Parthian and Okohe. His father, Anak, was a Prince said to be related to the Arsacid Kings of Armenia or was from the House of Suren, one of the seven branches of the ruling Arsacid dynasty of Sakastan. Anak was charged with assassinating Khosrov II, one of the kings of the Arsacid dynasty and was put to death. Gregory narrowly escaped execution with the help of Sopia and Yevtagh, his caretakers. He was taken to Caesarea in Cappadocia where Sopia and Yevtagh hoped to raise him. Gregory was given to the Christian Holy Father Phirmilianos (Euthalius) to be educated and was brought up as a devout Christian. Upon coming of age, Gregory married a woman called Miriam, a devout Christian who was the daughter of a Christian Armenian prince in Cappadocia. From their union, Miriam bore Gregory two children, their sons Vrtanes and Aristaces. Through Vrtanes, Gregory and Miriam would have further descendants and when Gregory died, Aristaces succeeded him. At some point, Miriam and Gregory separated in order that Gregory might take up a monastic life. Gregory left Cappadocia and went to Armenia in the hope of atoning for his father's crime by evangelizing his homeland. At that time Tiridates III, son of the late King Khosrov II, reigned. Influenced partly by the fact that Gregory was the son of his father's enemy, he ordered Gregory imprisoned for twelve (some sources indicate fourteen) years in a pit on the Ararat Plain under the present day church of Khor Virap located near the historical city Artashat in Armenia. Gregory was eventually called forth from his pit in c. 297 to restore to sanity Tiridates III, who had lost all reason after he was betrayed by Roman emperor Diocletian. Diocletian invaded and vast amounts of territory from western provinces of Greater Armenia became protectorates of Rome. In 301 Gregory baptized Tiridates III along with members of the royal court and upper class as Christians. Tiridates III issued a decree by which he granted Gregory full rights to begin carrying out the conversion of the entire nation to the Christian faith. The same year Armenia became the first country to adopt Christianity as its state religion. The newly built cathedral, the Mother Church in Etchmiadzin became and remains the spiritual and cultural center of Armenian Christianity and center of the Armenian Apostolic Church. Most Armenians were baptized in the Aratsani (upper Euphrates) and Yeraskh (Arax) rivers. Two princes from Ujjain, India had founded a large kingdom in Armenia in 165 BCE and they established 22 cities covering most of modern Armenia including two Hindu temples dedicated to Gisane and Demetr. According to the account of Zenob Glak, one of the first disciples of Gregory the Illuminator, the temples were destroyed and the priests killed along with 1038 defenders of the raid that was ordered by Gregory. 
Many of the pre-Christian (traditional Indo-European) festivals and celebrations such as Tyarndarach (Trndez, associated with fire worship) and Vardavar or Vadarvar associated with water worship, that dated back thousands of years, were preserved and continued in the form of Christian celebrations and chants. In 302, Gregory received consecration as Patriarch of Armenia from Leontius of Caesarea, his childhood friend. In 318, Gregory appointed his second son Aristaces as the next Catholicos in line of Armenia's Holy Apostolic Church to stabilize and continue strengthening Christianity not only in Armenia, but also in the Caucasus. Gregory also placed and instructed his grandson Gregory (one of the sons of Vrtanes) in charge of the holy missions to the peoples and tribes of all of the Caucasus and Caucasian Albania; the younger man was martyred by a fanatical mob while preaching in Albania. In his later years, Gregory withdrew to a small sanctuary near Mount Sebuh (Mt. Sepuh) in the Daranali province (Manyats Ayr, Upper Armenia) with a small convent of monks, where he remained until his death. After his death his corpse was removed to the village of Thodanum (T'ordan, modern Doğanköy, near Erzincan). His remains were scattered near and far in the reign of the Eastern Roman Emperor Zeno. His head is believed to be now in Armenia, his left hand at Mother See of Holy Etchmiadzin, and his right at the Holy See of Cilicia in Antelias, Lebanon. In the 8th century, the iconoclast decrees in Greece caused a number of religious orders to flee the Byzantine Empire and seek refuge elsewhere. San Gregorio Armeno in Naples was built in that century over the remains of a Roman temple dedicated to Ceres, by a group of nuns escaping from the Byzantine Empire with the relics of Gregory. A number of prayers, and about thirty of the canonical hymns of the Armenian Church, are ascribed to Gregory the Illuminator. Homilies of his appeared for the first time in a work called "Haschacnapadum" at Constantinople in 1737; a century afterwards a Greek translation was published at Venice by the Mekhiterists; and they have since been edited in German by J M Schmid (Ratisbon, 1872). The original authorities for Gregory's life are Agathangelos, whose "History of Tiridates" was published by the Mekhitarists in 1835; Moses of Chorene, "Historiae Armenicae"; and Symeon the Metaphrast. A "Life of Gregory" by the Vartabed Matthew was published in the Armenian language at Venice in 1749 and was translated into English by the Rev. Father Malan (1868). Gregory is commemorated on September 30 by the Orthodox Church, which styles him "Holy Hieromartyr Gregory, Bishop of Greater Armenia, Equal of the Apostles and Enlightener of Armenia." He is honored with a feast day on the liturgical calendar of the Episcopal Church (USA) on March 23.
https://en.wikipedia.org/wiki?curid=12651
God Emperor of Dune God Emperor of Dune is a science fiction novel by Frank Herbert published in 1981, the fourth in his "Dune" series of six novels. It was ranked as the No. 11 hardcover fiction best seller of 1981 by "Publishers Weekly". Leto II Atreides, the God Emperor, has ruled the universe as a tyrant for 3,500 years after becoming a hybrid of human and giant sandworm in "Children of Dune". The death of all other sandworms, and his control of the remaining supply of the all-important drug melange, has allowed him to keep civilization under his complete command. Leto has been physically transformed into a worm, retaining only his human face and arms, and though he is now seemingly immortal and invulnerable to harm, he is prone to instinct-driven bouts of violence when provoked to anger. As a result, his rule is one of religious awe and despotic fear. Leto has disbanded the Landsraad to all but a few Great Houses; the remaining powers defer to his authority, although they individually conspire against him in secret. The Fremen have long since lost their identity and military power, and have been replaced as the Imperial army by the Fish Speakers, an all-female army who obey Leto without question. He has rendered the human population into a state of trans-galactic stagnation; space travel is non-existent to most people in his Empire, which he has deliberately kept to a near-medieval level of technological sophistication. All of this he has done in accordance with a prophecy divined through precognition that will establish an enforced peace preventing humanity from destroying itself through aggressive behavior. The desert planet Arrakis has been terraformed into a lush forested biosphere, except for a single section of desert retained by Leto for his Citadel. A string of Duncan Idaho gholas have served Leto over the millennia, and Leto has also fostered the bloodline of his twin sister Ghanima. Her descendant Moneo is Leto's majordomo and closest confidante, while Moneo's daughter Siona has become the leader of an Arrakis-based rebellion against Leto. She steals a set of secret records from his archives, not realizing that he has allowed it. Leto intends to breed Siona with the latest Duncan ghola, but is aware that the ghola, moved by his own morality, may try to assassinate him before this can occur. The Ixians send a new ambassador named Hwi Noree to serve Leto, and though he realizes that she has been specifically designed and trained to ensnare him, he cannot resist falling in love with her. She agrees to marry him. Leto tests Siona by taking her out to the middle of the desert. After improperly using her stillsuit to preserve moisture, dehydration forces her to accept Leto's offer of spice essence from his body to replenish her. Awakened to Leto's prophecy, known as the Golden Path, the experience convinces Siona of the importance of the Golden Path. She remains dedicated to Leto's destruction, and an errant rainstorm demonstrates for her his mortal vulnerability to water. Siona and Idaho overcome a searing mutual hatred of each other to plan an assassination. As Leto's wedding procession moves across a high bridge over a river, Siona's associate Nayla destroys the support beams with a lasgun. The bridge collapses and Leto's entourage, including Hwi, plunge to their deaths into the river below. Leto's body rends apart in the water; the sandtrout which are part of his body encyst the water and scurry off, while the worm portion burns and disintegrates on the shore. 
A dying Leto reveals his secret breeding program among the Atreides to produce a human who is invisible to prescient vision. Siona is the finished result, and she and her descendants will retain this ability. He explains that humanity is now free to scatter throughout the universe, never again to face complete destruction. After revealing the location of his secret spice hoard, Leto dies, leaving Duncan and Siona to face the task of managing the empire. In "God Emperor of Dune", Frank Herbert analyzes the cyclical patterns of human society, as well as humanity's evolutionary drives. Using his ancestral memories, Leto II has knowledge of the entirety of human history and is able to recall the effects and patterns of tyrannical institutions, from the Babylonian Empire through the Jesuits on ancient Earth, and thus builds an empire existing as a complete nexus encompassing all these methods. This galactic empire differs from the historical tyrants in that it is deliberately designed to end in destruction, and is only instituted in the first place as part of a plan to rescue humanity from an absolute destruction which Leto II has foreseen through his oracular visions. Leto II personally explores the emergent effects of civilization, noting that most hierarchical structures are remnants of evolutionary urges toward safety. Thus, by forming a perfectly safe and stable empire, Leto II delivers a message to be felt throughout history. Stylistically, the novel is permeated by quotations from, and speeches by its main character, Leto, to a degree unseen in any of the other "Dune" novels. In part, this stylistic shift is an artifact of how Herbert wrote it: the first draft was written almost entirely in the first-person narrative voice, only being revised in later drafts to insert more third-person narration of events. "God Emperor of Dune" was ranked as the No. 11 hardcover fiction best seller of 1981 by "Publishers Weekly". The "Los Angeles Times" wrote that the novel was "Rich fare ... heavy stuff", and "Time" called it "a fourth visit to distant Arrakis that is every bit as fascinating as the other three—every bit as timely."
https://en.wikipedia.org/wiki?curid=12653
Goonhilly Satellite Earth Station Goonhilly Satellite Earth Station is a large radiocommunication site located on Goonhilly Downs near Helston on the Lizard peninsula in Cornwall, England. Owned by Goonhilly Earth Station Ltd under a 999-year lease from BT Group plc, it was at one time the largest satellite earth station in the world, with more than 25 communications dishes in use and over 60 in total. The site also links into undersea cable lines. Its first dish, Antenna One (dubbed "Arthur"), was built in 1962 to link with Telstar. It was the first open parabolic design and is 25.9 metres (85 feet) in diameter and weighs 1,118 tonnes. Pleumeur-Bodou Ground Station in Brittany received the first live transatlantic television broadcast from the United States via the Telstar satellite at 00:47 GMT on 11 July 1962; Arthur received its first pictures around midday the same day. It is now a Grade II listed structure and is therefore protected. The site has also played a key role in communications events such as the Muhammad Ali fights, the Olympic Games, the Apollo 11 Moon landing, and 1985's Live Aid concert. The site's largest dish, dubbed "Merlin", has a diameter of 32 metres (105 feet). Other dishes include Guinevere, Tristan, and Isolde after characters in Arthurian legend, much of which takes place in Cornwall. The earth station is powered by the National Grid. If power fails, all essential equipment will run off huge batteries for up to 20 minutes, during which time four one-megawatt diesel generators will take over. The nearby wind generator farm is not part of the complex. On 12 September 2006, BT announced it would shut down satellite operations at Goonhilly in 2008, and move them to Madley Communications Centre in Herefordshire, making that centre BT's only earth station. Until Easter 2010 the site had a visitor centre inside which the Connected Earth gallery told the history of satellite communications. There were many other interactive exhibits, a cafe, a shop and one of Britain's fastest cybercafés (a one-gigabit pipe and a theoretical maximum speed per computer of 100 Mbit/s). There were also tours around the main BT site and into the heart of Arthur. At its prime, the site attracted around 80,000 visitors a year, but in March 2010 BT announced that the visitor centre would be "Closed for Easter and beyond, until further notice." On 11 January 2011 it was announced that part of the site was to be sold to create a space science centre. This would involve upgrading some of the dishes to make them suitable for "deep space communication with spacecraft missions". A new company was formed to manage the operations, Goonhilly Earth Station Ltd. The company leased most of the antennas for at least three years with the option to buy the entire complex in the future. Goonhilly Earth Station Ltd. took ownership of the site in January 2014. There are plans to connect one or more of the Goonhilly dishes into global radio astronomy interferometer networks. There are also plans to upgrade the former visitor centre into "an outreach centre promoting space and space science for visitors, including local residents and schools". In July 2015 the European Space Agency began a nine-month feasibility study to examine whether the Goonhilly 6 antenna could be used to support the Artemis 1 mission of the Orion spacecraft. The site is (as of 2017) a partner in the bid by Newquay Airport to become the UK's first Spaceport. 
In April 2018, Goonhilly became part of a collaboration partnership for commercial lunar mission support services, with the European Space Agency and Surrey Satellite Technology. The agreement calls for the upgrade of Goonhilly, and development of a lunar pathfinder mission. Plans exist for small landers with a lunar mothership providing communications relay.
https://en.wikipedia.org/wiki?curid=12654
Godwin's law Godwin's law (or Godwin's rule of Hitler analogies) is an Internet adage asserting that "as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1". That is, if an online discussion (regardless of topic or scope) goes on long enough, sooner or later someone will compare someone or something to Adolf Hitler or his deeds, the point at which effectively the discussion or thread often ends. Promulgated by the American attorney and author Mike Godwin in 1990, Godwin's law originally referred specifically to Usenet newsgroup discussions. He stated that he introduced Godwin's law in 1990 as an experiment in memetics. It is now applied to any threaded online discussion, such as Internet forums, chat rooms, and comment threads, as well as to speeches, articles, and other rhetoric where "reductio ad Hitlerum" occurs. In 2012, "Godwin's law" became an entry in the third edition of the "Oxford English Dictionary". There are many corollaries to Godwin's law, some considered more canonical (by being adopted by Godwin himself) than others. For example, there is a tradition in many newsgroups and other Internet discussion forums that, when a Hitler comparison is made, the thread is finished and whoever made the comparison loses whatever debate is in progress. This principle is itself frequently referred to as Godwin's law. Godwin's law itself can be abused as a distraction, diversion or even as censorship, fallaciously miscasting an opponent's argument as hyperbole when the comparisons made by the argument are actually appropriate. Mike Godwin himself has also criticized the overapplication of Godwin's law, claiming it does not articulate a fallacy; it is instead framed as a memetic tool to reduce the incidence of inappropriate, hyperbolic comparisons. "Although deliberately framed as if it were a law of nature or of mathematics," Godwin wrote, "its purpose has always been rhetorical and pedagogical: I wanted folks who glibly compared someone else to Hitler to think a bit harder about the Holocaust." In December 2015, Godwin commented on the Nazi and fascist comparisons being made by several articles about Republican presidential candidate Donald Trump, saying: "If you're thoughtful about it and show some real awareness of history, go ahead and refer to Hitler when you talk about Trump, or any other politician." In August 2017, Godwin made similar remarks on social networking websites Facebook and Twitter with respect to the two previous days' Unite the Right rally in Charlottesville, Virginia, endorsing and encouraging efforts to compare its alt-right organizers to Nazis. In October 2018, Godwin said on Twitter that it was acceptable to call Brazilian politician Jair Bolsonaro, who had won the first round and later the second round of the presidential election, a "Nazi". In June 2019, after Chris Hayes invoked Godwin's Law in a discussion of whether it was appropriate to call the United States's refugee detention centers "concentration camps", Godwin explicitly stated his belief that the term "concentration camps" was appropriate.
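The "approaches 1" phrasing of the adage can be illustrated with a toy probability model. The sketch below is purely illustrative and is not part of Godwin's formulation: it assumes that each new post independently has some small fixed probability p of containing a Hitler or Nazi comparison, in which case the chance that at least one such comparison has appeared grows toward 1 as the thread lengthens:

```python
# Toy model (illustrative assumption, not Godwin's own claim): each post
# independently has probability p of containing a Hitler/Nazi comparison.
# The probability that at least one has appeared after n posts is
# 1 - (1 - p)**n, which tends to 1 as n grows.
p = 0.005  # assumed per-post probability, chosen only for illustration

for n in (10, 100, 1_000, 10_000):
    prob_so_far = 1 - (1 - p) ** n
    print(f"{n:>6} posts -> P(at least one comparison) = {prob_so_far:.3f}")
```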
https://en.wikipedia.org/wiki?curid=12656
General-purpose machine gun A general-purpose machine gun (GPMG) is an air-cooled, fully automatic weapon that can be adapted to light machine gun and medium machine gun roles. A GPMG typically features a quick-change barrel, provision for mounting on bipods, tripods, and vehicles as an infantry support weapon, and is chambered for full-power rifle cartridges such as the 7.62×51mm NATO, 7.62×54mmR, 7.5×54mm French, 7.5×55mm Swiss, and 7.92×57mm Mauser. The general-purpose machine gun (GPMG) originated with the MG 34, designed in 1934 by Heinrich Vollmer of Mauser on the commission of Nazi Germany to circumvent the restrictions on machine guns imposed by the Treaty of Versailles. It was introduced into the Wehrmacht as an entirely new concept in automatic firepower, dubbed the "Einheitsmaschinengewehr", meaning "universal machine gun" in German. In itself the MG 34 was an excellent weapon for its time: an air-cooled, recoil-operated machine gun that could run through belts of 7.92×57mm Mauser ammunition at a rate of 850 rounds per minute, delivering killing firepower at ranges of more than 1,000 meters. The main feature of the MG 34 was that simply by changing its mount, sights and feed mechanism, the operator could radically transform its function: on its standard bipod it was a light machine gun ideal for infantry assaults; on a tripod it could serve as a sustained-fire medium machine gun; mounted on aircraft or vehicles it became an air defence weapon, and it also served as the coaxial machine gun on numerous German tanks. During World War II, the MG 34 was superseded by a new GPMG, the MG 42, although it remained in combat use.
https://en.wikipedia.org/wiki?curid=12664
Gdynia Gdynia ( , ; ; , 1939-1945 "Gotenhafen") is a city in northern Poland. Located on Gdańsk Bay on the southern coast of the Baltic Sea, it is the second largest city in Pomeranian Voivodeship after Gdańsk and a major seaport. Gdynia has a population of c. 246,000, which makes it the twelfth-largest city in Poland. It is part of a conurbation with the spa town of Sopot, the city of Gdańsk and suburban communities, which together form a metropolitan area called the Tricity ("Trójmiasto"), with a population of over 1,000,000 people. Historically and culturally part of Kashubia in Eastern Pomerania, Gdynia for centuries remained a small farming and fishing village. At the beginning of the 20th-century Gdynia became a seaside resort town and experienced an inflow of tourists. This triggered an increase in the local population. After Poland regained its independence in 1918, a decision was made to construct a Polish seaport in Gdynia, between the Free City of Danzig (a semi-autonomous city-state) and German Pomerania, making Gdynia a primary economic hub. In 1926 Gdynia was granted city rights, after which it experienced a rapid demographic and architectural development. It was interrupted by the outbreak of World War II, during which the newly built port and shipyard were completely destroyed. The population of the city suffered heavy losses as most of the inhabitants were evicted and expelled. The locals were either displaced to other regions of occupied Poland or sent to German Nazi concentration camps throughout Europe. After the war, Gdynia was settled with the former inhabitants of Warsaw and lost cities, such as Lviv and Vilnius in the Eastern Borderlands. The city was gradually regenerating, with its shipyard being rebuilt and expanded. In December 1970 the shipyard workers protest against the increase of prices was bloodily repressed. This greatly contributed to the rise of the Solidarity movement in nearby Gdańsk. Today the port of Gdynia is a regular stopover on the itinerary of luxurious passenger ships and a new ferry terminal with a civil airport are under realisation. The city won numerous awards for its safety, infrastructure, quality of life and a rich variety of tourist attractions. In 2013 Gdynia was ranked as Poland's best city to live in and topped the rankings in the category of general quality of life. The area of the later city of Gdynia shared its history with Pomerelia (Eastern Pomerania); in prehistoric times it was the center of Oksywie culture; it was later populated by Slavs with some Baltic Prussian influences. The decision to build a major seaport at Gdynia village was made by the Polish government in winter 1920, in the midst of the Polish–Soviet War (1919–1920). The authorities and seaport workers of the Free City of Danzig felt Poland's economic rights in the city were being misappropriated to help fight the war. German dockworkers went on strike, refusing to unload shipments of military supplies sent from the West to aid the Polish army, and Poland realized the need for a port city it was in complete control of, economically and politically. Construction of Gdynia seaport started in 1921 but, because of financial difficulties, it was conducted slowly and with interruptions. It was accelerated after the Sejm (Polish parliament) passed the "Gdynia Seaport Construction Act" on 23 September 1922. By 1923 a 550-metre pier, of a wooden tide breaker, and a small harbour had been constructed. 
Ceremonial inauguration of Gdynia as a temporary military port and fishers' shelter took place on 23 April 1923. The first major seagoing ship arrived on 13 August 1923. To speed up the construction works, the Polish government in November 1924 signed a contract with the French-Polish Consortium for Gdynia Seaport Construction. By the end of 1925, they had built a small seven-metre-deep harbour, the south pier, part of the north pier, a railway, and had ordered the trans-shipment equipment. The works were going more slowly than expected, however. They accelerated only after May 1926, because of an increase in Polish exports by sea, economic prosperity, the outbreak of the German–Polish trade war which reverted most Polish international trade to sea routes, and thanks to the personal engagement of Eugeniusz Kwiatkowski, Polish Minister of Industry and Trade (also responsible for the construction of Centralny Okręg Przemysłowy). By the end of 1930 docks, piers, breakwaters, and many auxiliary and industrial installations were constructed (such as depots, trans-shipment equipment, and a rice processing factory) or started (such as a large cold store). Trans-shipments rose from 10,000 tons (1924) to 2,923,000 tons (1929). At this time Gdynia was the only transit and special seaport designed for coal exports. In the years 1931–1939 Gdynia harbour was further extended to become a universal seaport. In 1938 Gdynia was the largest and most modern seaport on the Baltic Sea, as well as the tenth biggest in Europe. The trans-shipments rose to 8.7 million tons, which was 46% of Polish foreign trade. In 1938 the Gdynia shipyard started to build its first full-sea ship, the "Olza". The city was constructed later than the seaport. In 1925 a special committee was inaugurated to build the city; city expansion plans were designed and city rights were granted in 1926, and tax privileges were granted for investors in 1927. The city started to grow significantly after 1928. A new railway station and the Post Office were completed. The State railways extended their lines, built bridges and also constructed a group of houses for their employees. Within a few years houses were built along some of road leading northward from the Free City of Danzig to Gdynia and beyond. Public institutions and private employers helped their staffs to build houses. In 1933 a plan of development providing for a population of 250,000 was worked out by a special commission appointed by a government committee, in collaboration with the municipal authorities. By 1939 the population had grown to over 120,000. The city and seaport were occupied in September 1939 by German troops and renamed "Gotenhafen" after the Goths, an ancient Germanic tribe, who had lived in the area. Some 50,000 Polish citizens, who after 1920 had been brought into the area by the Polish government after the decision to enlarge the harbour was made, were expelled to the General Government. Kashubians who were suspected to support the Polish cause, particularly those with higher education, were arrested and executed. The main place of execution was Piaśnica (Groß Plaßnitz), where about 12,000 were executed. The German gauleiter Albert Forster considered Kashubians of "low value" and did not support any attempts to create a Kashubian nationality. Some Kashubians organized anti-Nazi resistance groups, "Gryf Kaszubski" (later "Gryf Pomorski"), and the exiled "Zwiazek Pomorski" in Great Britain. The harbour was transformed into a German naval base. 
The shipyard was expanded in 1940 and became a branch of the Kiel shipyard ("Deutsche Werke Kiel A.G."). Gotenhafen became an important base, due to its being relatively distant from the war theater, and many large German ships—battleships and heavy cruisers—were anchored there. During 1942, Dr Joseph Goebbels authorized the relocation of an ocean liner to Gotenhafen Harbour to stand in for the Titanic during filming of the German-produced movie "Titanic", directed by Herbert Selpin. The city was also the location of the Nazi "Gotenhafen" subcamp of the Stutthof concentration camp. The seaport and the shipyard both witnessed several air raids by the Allies from 1943 onwards, but suffered little damage. Gotenhafen was used during winter 1944–45 to evacuate German troops and refugees trapped by the Red Army. Some of the ships were hit by torpedoes from Soviet submarines in the Baltic Sea on the route west; one of them sank, taking about 9,400 people with her – the worst loss of life in a single sinking in maritime history. The seaport area was largely destroyed in 1945 as the withdrawing German troops and millions of encircled refugees were bombarded by the Soviet military (90% of the buildings and equipment were destroyed), and the harbour entrance was blocked by a German battleship that had been brought to Gotenhafen for major repairs. On March 28, 1945, Gotenhafen was captured by the Soviets and assigned to the Polish Gdańsk Voivodeship; the city was again named Gdynia. In the Polish 1970 protests, worker demonstrations took place at Gdynia Shipyard. Workers were fired upon by the police. The fallen (e.g. Brunon Drywa) became symbolized by a fictitious worker Janek Wiśniewski, commemorated in a song by Mieczysław Cholewa, "Pieśń o Janku z Gdyni". One of Gdynia's important streets is named after Janek Wiśniewski. The same person was portrayed by Andrzej Wajda in his movie "Man of Iron" as Mateusz Birkut. On December 4, 1999, a storm destroyed a huge shipyard crane that was able to lift 900 tons. The climate of Gdynia is an oceanic climate owing to its position on the Baltic Sea, which moderates the temperatures compared to the interior of Poland. The climate is cool throughout the year and there is a somewhat uniform precipitation throughout the year. Typical of Northern Europe, there is little sunshine during the year. Gdynia is divided into smaller divisions: "dzielnicas" and "osiedles". Gdynia's "dzielnicas" include: Babie Doły, Chwarzno-Wiczlino, Chylonia, Cisowa, Dąbrowa, Działki Leśne, Grabówek, Kamienna Góra, Karwiny, Leszczynki, Mały Kack, Obłuże, Oksywie, Orłowo, Pogórze, Pustki Cisowskie-Demptowo, Redłowo, Śródmieście, Wielki Kack, Witomino-Leśniczówka, Witomino-Radiostacja and Wzgórze Św. Maksymiliana. Its "osiedles" include: Bernadowo, Brzozowa Góra, Chwarzno, Dąbrówka, Demptowo, Dębowa Góra, Fikakowo, Gołębiewo, Kacze Buki, Kolibki, Kolonia Chwaszczyno, Kolonia Rybacka, Krykulec, Marszewo, Międzytorze, Niemotowo, Osada Kolejowa, Osada Rybacka, Osiedle Bernadowo, Port, Pustki Cisowskie, Tasza, Wiczlino, Wielka Rola, Witomino, Wysoka, Zielenisz. Gdynia is a relatively modern city. Its architecture includes the 13th century St. Michael the Archangel's Church in Oksywie, the oldest building in Gdynia, and the 17th century neo-Gothic manor house located on Folwarczna Street in Orłowo. The city also holds many examples of early 20th-century architecture, especially monumentalism and early functionalism, and modernism. A good example of modernism is the PLO building situated at 10 Lutego Street. 
The surrounding hills and the coastline attract many nature lovers. A leisure pier and a cliff-like coastline in Kępa Redłowska, as well as the surrounding Nature Reserve, are also popular locations. In the harbour, there are two anchored museum ships, the destroyer and the tall ship frigate "Dar Pomorza". A -long promenade leads from the marina in the city center, to the beach in Redłowo. Most of Gdynia can be seen from Kamienna Góra ( asl) or the viewing point near Chwaszczyno. There are also two viewing towers, one at Góra Donas, the other at Kolibki. Gdynia hosts the Gdynia Film Festival, the main Polish film festival. The International Random Film Festival was hosted in Gdynia in November 2014. Since 2003 Gdynia has been hosting the Open'er Festival, one of the biggest contemporary music festivals in Europe. The festival welcomes many foreign hip-hop, rock and electronic music artists every year. The lineup for 2015 was Mumford and Sons, Of Monsters and Men, The Prodigy, The Vaccines and many more. Another important summer event in Gdynia is the Viva Beach Party, which is a large two-day techno party made on Gdynia's Public Beach and a summer-welcoming concerts CudaWianki. Gdynia also hosts events for the annual Gdańsk Shakespeare Festival. In the summer of 2014 Gdynia hosted Red Bull Air Race World Championship. In 2008, Gdynia made it onto the "Monopoly Here and Now World Edition" board after being voted by fans through the Internet. Gdynia occupies the space traditionally held by Mediterranean Avenue, being the lowest voted city to make it onto the Monopoly Here and Now board, but also the smallest city to make it in the game. All of the other cities are large and widely known ones, the second smallest being Riga. The unexpected success of Gdynia can be attributed to a mobilization of the town's population to vote for it on the Internet. An abandoned factory district in Gdynia was the scene for the survival series "Man vs Wild", season 6, episode 12. The host, Bear Grylls, manages to escape the district after blowing up a door and crawling through miles of sewer. Ernst Stavro Blofeld, the supervillain in the James Bond novels, was born in Gdynia on May 28, 1908, according to "Thunderball". Gdynia is sometimes called "Polish Roswell" due to the alleged UFO crash on January 21, 1959. Sport teams Notable companies that have their headquarters or regional offices in Gdynia: Former: In 2007, 364,202 passengers, 17,025,000 tons of cargo and containers passed through the port. Regular car ferry service operates between Gdynia and Karlskrona, Sweden. The conurbation's main airport, Gdańsk Lech Wałęsa Airport, lays approximately south-west of central Gdynia, and has connections to approximately 55 destinations. It is the third largest airport in Poland. A second General Aviation terminal was scheduled to be opened by May 2012, which will increase the airport's capacity to 5mln passengers per year. Another local airport, (Gdynia-Kosakowo Airport) is situated partly in the village of Kosakowo, just to the north of the city, and partly in Gdynia. This has been a military airport since the World War II, but it has been decided in 2006 that the airport will be used to serve civilians. Work was well in progress and was due to be ready for 2012 when the project collapsed following a February 2014 EU decision regarding Gdynia city funding as constituting unfair competition to Gdańsk airport. 
In March 2014, the airport management company filed for bankruptcy, this being formally announced in May that year. The fate of some PLN 100 million of public funds from Gdynia remains unaccounted for, with documents not being released despite repeated requests from residents to the city president, Wojciech Szczurek. Trasa Kwiatkowskiego links the Port of Gdynia and the city with Obwodnica Trójmiejska, and therefore the A1 motorway. National road 6 connects the Tricity with Słupsk, Koszalin and the Szczecin agglomeration. The principal station in Gdynia is Gdynia Główna railway station, and Gdynia has five other railway stations. Local services are provided by the 'Fast Urban Railway,' Szybka Kolej Miejska (Tricity), operating frequent trains covering the Tricity area including Gdańsk, Sopot and Gdynia. Long distance trains from Warsaw via Gdańsk terminate at Gdynia, and there are direct trains to Szczecin, Poznań, Katowice, Lublin and other major cities. In 2011–2015 the Warsaw–Gdańsk–Gdynia route underwent a major upgrade costing $3 billion, partly funded by the European Investment Bank, including track replacement, realignment of curves and relocation of sections of track to allow speeds up to , modernization of stations, and installation of the ETCS signalling system, scheduled for completion in June 2015. In December 2014 new Alstom Pendolino high-speed trains were put into service between Gdynia, Warsaw and Kraków, reducing rail travel times to Gdynia by 2 hours. There are currently 8 universities and institutions of higher education based in Gdynia. Many students from Gdynia also attend universities elsewhere in the Tricity. Gdynia is twinned with:
https://en.wikipedia.org/wiki?curid=12665
Gluon A gluon () is an elementary particle that acts as the exchange particle (or gauge boson) for the strong force between quarks. It is analogous to the exchange of photons in the electromagnetic force between two charged particles. In layman's terms, gluons "glue" quarks together, forming hadrons such as protons and neutrons. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than quantum electrodynamics (QED). The gluon is a vector boson, which means, like the photon, it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass to less than a few MeV/c². The gluon has negative intrinsic parity. Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD. This may be difficult to understand intuitively. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons may be thought of as carrying both color and anticolor. This gives nine "possible" combinations of color and anticolor in gluons: red–antired, red–antiblue, red–antigreen, blue–antired, blue–antiblue, blue–antigreen, green–antired, green–antiblue, and green–antigreen. These are not the "actual" color states of observed gluons, but rather "effective" states. To correctly understand how they are combined, it is necessary to consider the mathematics of color charge in more detail. It is often said that the stable strongly interacting particles (such as the proton and the neutron, i.e. hadrons) observed in nature are "colorless", but more precisely they are in a "color singlet" state, which is mathematically analogous to a "spin" singlet state. Such states allow interaction with other color singlets, but not with other color states; because long-range gluon interactions do not exist, this illustrates that gluons in the singlet state do not exist either. The color singlet state is (rr̄ + bb̄ + gḡ)/√3. In other words, if one could measure the color of the state, there would be equal probabilities of it being red-antired, blue-antiblue, or green-antigreen. There are eight remaining independent color states, which correspond to the "eight types" or "eight colors" of gluons. Because states can be mixed together as discussed above, there are many ways of presenting these states, which are known as the "color octet". One commonly used list is (rb̄ + br̄)/√2, −i(rb̄ − br̄)/√2, (rḡ + gr̄)/√2, −i(rḡ − gr̄)/√2, (bḡ + gb̄)/√2, −i(bḡ − gb̄)/√2, (rr̄ − bb̄)/√2, and (rr̄ + bb̄ − 2gḡ)/√6. These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 = 8 of them. There is no way to add any combination of these states to produce any other, and it is also impossible to add them to make the forbidden singlet state. There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results. Technically, QCD is a gauge theory with SU(3) gauge symmetry. 
Quarks are introduced as spinors in N_f flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers (like photons or gluons) is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is N² − 1; for SU(3) this gives the eight gluons (see the short numerical check after this paragraph). In terms of group theory, the assertion that there are no color singlet gluons is simply the statement that quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known "a priori" reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3). The U(1) group for the electromagnetic field combines with a slightly more complicated group known as SU(2) – S stands for "special" – which means the corresponding matrices have determinant +1 in addition to being unitary. Since gluons themselves carry color charge, they participate in strong interactions. These gluon-gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to about 10⁻¹⁵ meters, roughly the size of an atomic nucleus. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark-antiquark pair out of the vacuum rather than increase the length of the flux tube. Gluons also share this property of being confined within hadrons. One consequence is that gluons are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons. Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons — called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles. Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As shown at the 1978 summer conferences, the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance Υ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin 1 nature of the gluon (see also the recollection and PLUTO experiments). In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), three-jet topologies were again observed, now interpreted as quark–antiquark pair production with gluon bremsstrahlung and now clearly visible, by the TASSO, MARK-J and PLUTO experiments (later in 1980 also by JADE). The spin 1 of the gluon was confirmed in 1980 by the TASSO and PLUTO experiments (see also the review). In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result. 
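As a quick numerical check of the counting above, the following minimal sketch (illustrative only; the matrices and checks are standard textbook material, not taken from this article) writes down the eight Gell-Mann matrices, the usual generators of SU(3), and verifies that they are Hermitian, traceless and linearly independent, so that the adjoint representation indeed has dimension 3² − 1 = 8, one gluon per generator:

```python
import numpy as np

# The eight Gell-Mann matrices: the standard basis of the su(3) algebra.
gell_mann = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]

# Each generator is Hermitian and traceless, as SU(3) requires.
assert all(np.allclose(m, m.conj().T) for m in gell_mann)
assert all(abs(np.trace(m)) < 1e-12 for m in gell_mann)

# Linear independence: the eight flattened matrices span an 8-dimensional
# space, matching the dimension of the adjoint representation, 3**2 - 1.
octet = np.array([m.flatten() for m in gell_mann])
print(np.linalg.matrix_rank(octet))  # -> 8

# They are also independent of the "forbidden" singlet direction (the
# identity), so adding it raises the rank to 9.
singlet = np.eye(3, dtype=complex).flatten() / np.sqrt(3)
print(np.linalg.matrix_rank(np.vstack([octet, singlet])))  # -> 9
```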
The gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS, in the years 1996-2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA. The gluon density in the proton (when behaving hadronically) also has been measured. Color confinement is verified by the failure of free quark searches (searches of fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate the quantum color and flavor numbers; however at Fermilab single production of top quarks has been shown (technically this still involves a pair production, but quark and antiquark are of different flavor). No glueball has been demonstrated. Deconfinement was claimed in 2000 at CERN SPS in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, less interacting than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments. A quark–gluon plasma state has been confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010. The Continuous Electron Beam Accelerator Facility at Jefferson Lab, also called the Thomas Jefferson National Accelerator Facility, in Newport News, Virginia, is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility on Long Island, New York, Brookhaven National Laboratory, for funds to build a new electron-ion collider. On December 2019 the US Department of Energy selected the Brookhaven National Laboratory to host the electron-ion collider.
https://en.wikipedia.org/wiki?curid=12666
Foreign relations of Lithuania Lithuania is a country on the south-eastern shore of the Baltic Sea, a member of the United Nations Organisation, the Organisation for Security and Cooperation in Europe, the European Union, the North Atlantic Treaty Organisation, and the World Trade Organisation. Currently, Lithuania maintains diplomatic relations with 182 states. Lithuania became a member of the United Nations on 17 September 1991, and is a signatory to a number of its organizations and other international agreements. It is also a member of the Organization for Security and Cooperation in Europe, NATO and its adjunct North Atlantic Coordinating Council, the Council of Europe, and the European Union. Lithuania gained membership in the World Trade Organization on 31 May 2001. On 1 May 2004, Lithuania became one of the 28 Member States of the European Union. The EU activities affect different spheres of politics, from consumer rights to national defence matters. In the second half of 2013, Lithuania took presidency over the EU Council. Membership in the Union has strengthened the domestic economy, giving it access to the wide pan-European market. Foreign direct investments in Lithuania are growing. The country is poised to become energy-independent. The accession to the Schengen space in 2007 has opened up possibilities for the free movement of both citizens and goods across 25 European states. Lithuania's citizens enjoy equal social guarantees while working, travelling, or studying in the Community's countries. The country now benefits from additional EU funds and programme funding in the field of education and science. As EU citizens, citizens of Lithuania are guaranteed consular assistance from EU representative offices in countries where Lithuania has none. On 29 March 2004, Lithuania became a member of the North Atlantic Treaty Organisation. It is a defensive union based on political and military cooperation of sovereign states. Its members are committed to protecting freedom, guarding shared heritage and civilisation under the principles of democracy, individual freedom, and the rule of law. According to Article 5 of the agreement, all NATO states are obliged to defend one another. Lithuania entered into cooperation with NATO in 1991. Five years later, Lithuania launched its mission to the organisation, and in late 2002, Lithuania and six other states were invited to start negotiations over membership in the Alliance. Today Lithuania sees NATO as the key and most effective collective defence system, one that ensures the security of the state and stands to deter potential aggression, and it employs every measure available to strengthen trans-Atlantic relations and to contribute to stronger EU–U.S. relations. On 17 September 1991, Lithuania became a member of the United Nations. This global international organisation has 193 member states. The United Nations Charter anchors the goals of the organisation, which are to maintain international peace and security, stop aggression under the principles of justice and international law, and regulate or resolve international disputes. Lithuania's interests at the United Nations are represented by the permanent mission of the Republic of Lithuania in New York, the permanent mission to the United Nations office and other international organisations in Geneva, and the permanent mission to international organisations in Vienna. 
In 2013, Lithuania was elected a non-permanent member of the United Nations Security Council for the 2014–2015 term, and it held the presidency of the Security Council in February 2014 and May 2015. In 2007, Lithuania presided over the Economic and Social Council of the United Nations. Currently, Lithuania is a member of nearly 50 international intergovernmental organisations, having joined many of them after it regained its independence and having had its membership of inter-war organisations restored. In 2015, Lithuania was elected to the UNESCO Executive Board for a third time. Lithuania is also a member of the International Telecommunication Union and the World Tourism Organisation. Lithuania is actively involved in regional organisations as well. In 2001–2002, Lithuania held the chairmanship of the Committee of Ministers of the Council of Europe, and in 2012 it chaired Baltic Sea regional formats such as the Baltic Council of Ministers and the NB8. The 2011 chairmanship of the Organisation for Security and Cooperation in Europe was also a success. Lithuania is likewise an active participant in cooperation among the countries of Northern Europe. Lithuania has been a member of the Baltic Council since its establishment in 1993. The Baltic Council is a permanent organisation for international cooperation, located in Tallinn; it operates through the Baltic Assembly and the Baltic Council of Ministers. Lithuania also cooperates with the Nordic countries and the other two Baltic states through the NB8 cooperation format. A similar format, NB6, unites the Nordic and Baltic countries that are members of the EU. The main goal of NB6 cooperation is to discuss and agree on positions before presenting them in the Council of the European Union and at the meetings of the EU foreign affairs ministers. The Council of the Baltic Sea States (CBSS) was established in 1992 in Copenhagen as an informal regional political forum whose main aim is to promote the integration process and to foster close contacts between the countries of the region. The members of the CBSS are Denmark, Estonia, Finland, Germany, Iceland, Latvia, Lithuania, Norway, Poland, Russia, Sweden and the European Commission. The observer states are Belarus, France, Italy, the Netherlands, Romania, Slovakia, Spain, the United States, the United Kingdom and Ukraine. Cooperation between the Nordic Council of Ministers and Lithuania is political cooperation through which the exchange of experience contributes to the realisation of joint goals. One of its most important functions is to discover new trends and new possibilities for joint cooperation. The Nordic Council of Ministers' information office aims to present Nordic concepts and demonstrate Nordic cooperation in Lithuania. Lithuania, together with the other two Baltic countries, is also a member of the Nordic Investment Bank (NIB) and cooperates in the NORDPLUS programme, which is committed to education. The Baltic Development Forum (BDF) is an independent non-profit organisation which unites large companies, cities, business associations and institutions in the Baltic Sea region. In 2010 the 12th Summit of the BDF was held in Vilnius. Lithuania has been a trans-shipment point for opiates and other illicit drugs from Russia, Southwest Asia, Latin America and Western Europe to Western Europe and Scandinavia. Lithuania is a signatory to 8 of the 12 international conventions related to counter-terrorist activities. The International Organization for Migration (IOM) reports that about 1,000 citizens of Lithuania fall victim to trafficking annually. Most are women between the ages of 21 and 30 who are sold into prostitution.
https://en.wikipedia.org/wiki?curid=17828
History of Luxembourg The history of Luxembourg consists of the history of the country of Luxembourg and its geographical area. Although its recorded history can be traced back to Roman times, the history of Luxembourg proper is considered to begin in 963. Over the following five centuries, the powerful House of Luxembourg emerged, but its extinction put an end to the country's independence. After a brief period of Burgundian rule, the country passed to the Habsburgs in 1477. After the Eighty Years' War, Luxembourg became a part of the Southern Netherlands, which passed to the Austrian line of the Habsburg dynasty in 1713. After occupation by Revolutionary France, the 1815 Treaty of Paris transformed Luxembourg into a Grand Duchy in personal union with the Netherlands. The treaty also resulted in the second partitioning of Luxembourg, the first being in 1658 and a third in 1839. Although these treaties greatly reduced Luxembourg's territory, the latter established its formal independence, which was confirmed after the Luxembourg Crisis of 1867. In the following decades, Luxembourg fell further into Germany's sphere of influence, particularly after the creation of a separate ruling house in 1890. It was occupied by Germany from 1914 until 1918 and again from 1940 until 1944. Since the end of the Second World War, Luxembourg has become one of the world's richest countries, buoyed by a booming financial services sector, political stability, and European integration. In the territory now covered by the Grand Duchy of Luxembourg, there is evidence of primitive inhabitants dating back to the Paleolithic or Old Stone Age over 35,000 years ago. The oldest artefacts from this period are decorated bones found at Oetrange. However, the first real evidence of civilisation is from the Neolithic or 5th millennium BC, from which evidence of houses has been found. Traces have been found in the south of Luxembourg at Grevenmacher, Diekirch, Aspelt and Weiler-la-Tour. The dwellings were made of a combination of tree trunks for the basic structure, mud-clad wickerwork walls, and roofs of thatched reeds or straw. Pottery from this period has been found near Remerschen. While there is not much evidence of communities in Luxembourg at the beginning of the Bronze Age, a number of sites dating back to the period between the 13th and the 8th century BC provide evidence of dwellings and reveal artefacts such as pottery, knives and jewellery. The sites include Nospelt, Dalheim, Mompach and Remerschen. What is present-day Luxembourg, was inhabited by Celts during the Iron Age (from roughly 600 BC until 100 AD). The Gaulish tribe in what is present-day Luxembourg during and after the La Tène period was known as the Treveri; they reached the height of prosperity in the 1st century BC. The Treveri constructed a number of oppida, Iron Age fortified settlements, near the Moselle valley in what is now southern Luxembourg, western Germany and eastern France. Most of the archaeological evidence from this period has been discovered in tombs, many closely associated with Titelberg, a 50 ha site which reveals much about the dwellings and handicrafts of the period. The Romans, under Julius Caesar, completed their conquest and occupation in 53 BC. The first known reference to the territory of present-day Luxembourg was by Julius Caesar in his "Commentaries on the Gallic War". The Treveri were more co-operative with the Romans than most Gallic tribes, and adapted readily to Roman civilization. 
Two revolts in the 1st century AD did not permanently damage their cordial relations with Rome. The land of the Treveri was at first part of Gallia Celtica, but with the reform of Domitian in c. 90, was reassigned to Gallia Belgica. Gallia Belgica was infiltrated by the Germanic Franks from the 4th century, and was abandoned by Rome in AD 406. The territory of what would become Luxembourg by the 480s, became part of Merovingia Austrasia and eventually part of the core territory of the Carolingian Empire. With the Treaty of Verdun (843), it fell to Middle Francia, and in 855, to Lotharingia. With the latter's division in 959, it then fell to the Duchy of Upper Lorraine within the Holy Roman Empire. The history of Luxembourg properly began with the construction of Luxembourg Castle in the High Middle Ages. It was Siegfried I, count of Ardennes who traded some of his ancestral lands with the monks of the Abbey of St. Maximin in Trier in 963 for an ancient, supposedly Roman, fort named "Lucilinburhuc", commonly translated as "little castle". Modern historians link the etymology of the word with "Letze", meaning fortification, which may have referred to either the remains of a Roman watchtower or to a primitive refuge of the early Middle Ages. From the Early Middle Ages to the Renaissance, Luxembourg bore multiple names, depending on the author. These include "Lucilinburhuc", "Lutzburg", "Lützelburg", "Luccelemburc", and "Lichtburg", among others. The Luxembourgish dynasty produced several Holy Roman Emperors, Kings of Bohemia, and Archbishops of Trier and Mainz. Around the fort of Luxembourg, a town gradually developed, which became the centre of a small but important state of great strategic value to France, Germany and the Netherlands. Luxembourg's fortress, located on a rocky outcrop known as the Bock, was steadily enlarged and strengthened over the years by successive owners. Some of these included the Bourbons, Habsburgs and Hohenzollerns, who made it one of the strongest fortresses on the European continent, the Fortress of Luxembourg. Its formidable defences and strategic location caused it to become known as the ‘Gibraltar of the North’. In the 17th and 18th centuries, the electors of Brandenburg, later kings of Prussia (Borussia), advanced their claim to the Luxembourg patrimony as heirs-general to William of Thuringia and his wife Anna of Bohemia, the disputed dukes of Luxembourg in the 1460s. Anna was the eldest daughter of the last Luxembourg heiress. From 1609 onward, they had a territorial base in the vicinity, the Duchy of Cleves, the starting-point of the future Prussian Rhineland. This Brandenburger claim ultimately produced some results when some districts of Luxembourg were united with Prussia in 1813. The first Hohenzollern claimant to descend from both Anna and her younger sister Elisabeth, was John George, Elector of Brandenburg (1525–98), his maternal grandmother having been Barbara Jagiellon. In the late 18th century, the younger line of Orange-Nassau (the princes who held sway in the neighbouring Dutch oligarchy) also became related to the Brandenburgers. In 1598, the then possessor, Philip II of Spain, bequeathed Luxembourg and the other Low Countries to his daughter, the Infanta Isabella Clara Eugenia and her husband Albert VII, Archduke of Austria. Albert was an heir and descendant of Elisabeth of Austria (d. 1505), queen of Poland, the youngest granddaughter of Sigismund of Luxembourg, the Holy Roman Emperor. 
Thus, Luxembourg returned to the heirs of the old Luxembourg dynasty of the line of Elisabeth. The Low Countries were a separate political entity during the couple's reign. After Albert's childless death in 1621, Luxembourg passed to his great-nephew and heir Philip IV of Spain. Luxembourg was invaded by Louis XIV of France (husband of Maria Theresa, daughter of Philip IV) in 1684, an action that caused alarm among France's neighbors and resulted in the formation of the League of Augsburg in 1686. In the ensuing War of the Grand Alliance, France was forced to give up the duchy, which was returned to the Habsburgs by the Treaty of Ryswick in 1697. During this period of French rule, the defences of the fortress were strengthened by the famous siege engineer Vauban. The French king's great-grandson Louis (1710–74) was, from 1712, the first heir-general of Albert VII. Albert VII was a descendant of Anna of Bohemia and William of Thuringia, having that blood through his mother's Danish great-great-grandmother, but was not the heir-general of that line. Louis was the first real claimant of Luxembourg to descend from both sisters, the daughters of Elisabeth of Bohemia, the last Luxembourg empress. Habsburg rule was confirmed in 1715 by the Treaty of Utrecht, and Luxembourg was integrated into the Southern Netherlands. Emperor Joseph and his successor Emperor Charles VI were descendants of Spanish kings who were heirs of Albert VII. Joseph and Charles VI were also descendants of Anna of Bohemia and William of Thuringia, having that blood through their mother, although they were heirs-general of neither line. Charles was the first ruler of Luxembourg to descend from both sisters, daughters of Elisabeth of Bohemia. Austrian rulers were ready to exchange Luxembourg and other territories in the Low Countries. Their purpose was to round out and enlarge their power base, which in geographical terms was centred around Vienna. Thus, Bavarian candidate(s) emerged to take over the Duchy of Luxembourg, but this plan led to nothing permanent. Emperor Joseph II however, made a preliminary pact to make a neighbour of Luxembourg, Charles Theodore, Elector Palatine, as Duke of Luxembourg and king in the Low Countries, in exchange for his possessions in Bavaria and Franconia. However, this scheme was aborted by Prussia's opposition. Charles Theodore, who would have become Duke of Luxembourg, was genealogically a junior descendant of both Anna and Elisabeth, but the main heir of neither. During the War of the First Coalition, Luxembourg was conquered and annexed by Revolutionary France, becoming part of the "département" of the Forêts in 1795. The annexation was formalised at Campo Formio in 1797. In 1798, Luxembourgish peasants started a rebellion against the French but it was rapidly suppressed. This brief rebellion is called the Peasant's War. Luxembourg remained more or less under French rule until the defeat of Napoleon in 1815. When the French departed, the Allies installed a provisional administration. Luxembourg initially came under the "Generalgouvernement Mittelrhein" in mid-1814, and then from June 1814 under the "Generalgouvernement Nieder- und Mittelrhein" (General Government Lower and Middle Rhine). The Congress of Vienna of 1815, gave formal autonomy to Luxembourg. In 1813, the Prussians had already managed to wrest lands from Luxembourg, to strengthen the Prussian-possessed Duchy of Julich. 
The Bourbons of France held a strong claim to Luxembourg; the Emperor Francis of Austria, on the other hand, had controlled the duchy until the revolutionary forces joined it to the French republic. However, his Chancellor, Klemens von Metternich, was not enthusiastic about regaining Luxembourg and the Low Countries, as they were separated so far from the main body of the Austrian Empire. Prussia and the Netherlands, both claiming Luxembourg, made an exchange deal: Prussia received the Principality of Orange-Nassau, the ancestral principality of the Prince of Orange in Central Germany, and the Prince of Orange in turn received Luxembourg. Luxembourg, somewhat diminished in size (as the medieval lands had been slightly reduced by the French and Prussian heirs), was augmented in another way through the elevation to the status of grand duchy and placed under the rule of William I of the Netherlands. This was the first time that the duchy had a monarch who had no claim to the inheritance of the medieval patrimony. However, Luxembourg's military value to Prussia prevented it from becoming a full part of the Dutch kingdom. The fortress, ancestral seat of the medieval Luxembourgers, was garrisoned by Prussian forces following Napoleon's defeat, and Luxembourg became a member of the German Confederation with Prussia responsible for its defence, while remaining a state under the suzerainty of the Netherlands at the same time. In July 1819, a contemporary from Britain visited Luxembourg; his journal offers some insights. Norwich Duff writes of its city that "Luxembourg is considered one of the strongest fortifications in Europe, and … it appears so. It is situated in Holland (then as now used by English speakers as shorthand for the Netherlands) but by treaty is garrisoned by Prussians and 5,000 of their troops occupy it under a Prince of Hesse. The civil government is under the Dutch and the duties collected by them. The town is not very large but the streets are broader than [in] the French towns and clean and the houses are good... [I] got the cheapest of hot baths here at the principal house I ever had in my life: one franc." In 1820, Luxembourg made the use of the metric system of measurement compulsory. Previously, the country had been using local units such as the "malter" (which was equivalent to 191 litres). Much of the Luxembourgish population joined the Belgian revolution against Dutch rule. Except for the fortress and its immediate vicinity, Luxembourg was considered a province of the new Belgian state from 1830 to 1839. By the Treaty of London in 1839, the status of the grand duchy became fully sovereign and in personal union with the king of the Netherlands. In turn, the predominantly Oïl-speaking, geographically larger western part of the duchy was ceded to Belgium as the province de Luxembourg. This loss left the Grand Duchy of Luxembourg a predominantly German state, although French cultural influence remained strong. The loss of Belgian markets also caused painful economic problems for the state. Recognising this, the grand duke integrated it into the German "Zollverein" in 1842. Nevertheless, Luxembourg remained an underdeveloped agrarian country for most of the century. As a result of this, about one in five of the inhabitants emigrated to the United States between 1841 and 1891. In 1867, Luxembourg's independence was confirmed, after a turbulent period which even included a brief time of civil unrest against plans to annex Luxembourg to Belgium, Germany, or France.
The crisis of 1867 almost resulted in war between France and Prussia over the status of Luxembourg, which had become free of German control when the German Confederation was abolished at the end of the Seven Weeks War in 1866. William III, king of the Netherlands, and sovereign of Luxembourg, was willing to sell the grand duchy to France's Emperor Napoleon III in order to retain Limbourg but backed out when Prussian chancellor, Otto von Bismarck, expressed opposition. The growing tension brought about a conference in London from March to May 1867 in which the British served as mediators between the two rivals. Bismarck manipulated public opinion, resulting in the denial of sale to France. The issue was resolved by the second Treaty of London which guaranteed the perpetual independence and neutrality of the state. The fortress walls were pulled down and the Prussian garrison was withdrawn. Famous visitors to Luxembourg in the 18th and 19th centuries included the German poet Johann Wolfgang von Goethe, the French writers Émile Zola and Victor Hugo, the composer Franz Liszt, and the English painter Joseph Mallord William Turner. Luxembourg remained a possession of the kings of the Netherlands until the death of William III in 1890, when the grand duchy passed to the House of Nassau-Weilburg due to the 1783 Nassau Family Pact, under which those territories of the Nassau family in the Holy Roman Empire at the time of the pact (Luxembourg and Nassau) were bound by semi-Salic law, which allowed inheritance by females or through the female line only upon extinction of male members of the dynasty. When William III died leaving only his daughter Wilhelmina as an heir, the crown of the Netherlands, not being bound by the family pact, passed to Wilhelmina. However, the crown of Luxembourg passed to a male of another branch of the House of Nassau: Adolphe, the dispossessed Duke of Nassau and head of the branch of Nassau-Weilburg. World War I affected Luxembourg at a time when the nation-building process was far from complete. The small grand duchy (about 260,000 inhabitants in 1914) opted for an ambiguous policy between 1914 and 1918. With the country occupied by German troops, the government, led by Paul Eyschen, chose to remain neutral. This strategy had been elaborated with the approval of Marie-Adélaïde, Grand Duchess of Luxembourg. Although continuity prevailed on the political level, the war caused social upheaval, which laid the foundation for the first trades union in Luxembourg. The end of the occupation in November 1918, squared with a time of uncertainty on the international and national levels. The victorious Allies disapproved of the choices made by the local élites, and some Belgian politicians even demanded the (re)integration of the country into a greater Belgium. Within Luxembourg, a strong minority asked for the creation of a republic. In the end, the grand duchy remained a monarchy but was led by a new head of state, Charlotte. In 1921, it entered into an economic and monetary union with Belgium. During most of the 20th century, however, Germany remained its most important economic partner. The introduction of universal suffrage for men and women favored the Rechtspartei (party of the Right) which played the dominant role in the government throughout the 20th century, with the exception of 1925–26 and 1974–79, when the two other important parties, the Liberal and the Social-Democratic parties, formed a coalition. 
The success of the resulting party was due partly to the support of the church (the population was more than 90 percent Catholic) and of its newspaper, the "Luxemburger Wort". On the international level, the interwar period was characterised by an attempt to put Luxembourg on the map. Especially under Joseph Bech, head of the Department of Foreign Affairs, the country participated more actively in several international organisations in order to ensure its autonomy. On December 16, 1920, Luxembourg became a member of the League of Nations. On the economic level, in the 1920s and the 1930s the agricultural sector declined in favour of industry, but even more in favour of the service sector. The proportion of the active population in this last sector rose from 18 percent in 1907 to 31 percent in 1935. In the 1930s, the internal situation deteriorated, as Luxembourgish politics were influenced by European left- and right-wing politics. The government tried to counter communist-led unrest in the industrial areas and continued friendly policies towards Nazi Germany, which led to much criticism. The attempts to quell unrest peaked with the "Maulkuerfgesetz", the "muzzle law", which was an attempt to outlaw the Communist Party. The law was turned down in a 1937 referendum. Upon the outbreak of the Second World War in September 1939, the government of Luxembourg observed its neutrality and issued an official proclamation to that effect on September 6, 1939. On May 10, 1940, an invasion by German armed forces swept the Luxembourgish government and monarchy into exile. The German troops, made up of the 1st, 2nd, and 10th Panzer Divisions, invaded at 04:35. They did not encounter any significant resistance save for some bridges destroyed and some land mines, since the majority of the Luxembourgish Volunteer Corps stayed in their barracks. Luxembourgish police resisted the German troops, but to little avail, and the capital city was occupied before noon. Total Luxembourgish casualties amounted to 75 police and soldiers captured, six police wounded, and one soldier wounded. The Luxembourg royal family and their entourage received visas from Aristides de Sousa Mendes in Bordeaux. They crossed into Portugal and subsequently travelled to the United States in two groups: the first from Lisbon to Baltimore in July 1940, and the second on the Pan American airliner "Yankee Clipper" in October 1940. Throughout the war, Grand Duchess Charlotte broadcast via the BBC to Luxembourg to give hope to the people. Luxembourg remained under German military occupation until August 1942, when the Third Reich formally annexed it as part of the Gau "Moselland". The German authorities declared Luxembourgers to be German citizens and called up 13,000 for military service. Some 2,848 Luxembourgers eventually died fighting in the German army. Luxembourgish opposition to this annexation took the form of passive resistance at first, as in the "Spéngelskrich" (lit. "War of the Pins"), and in refusal to speak German. As French was forbidden, many Luxembourgers resorted to resuscitating old Luxembourgish words, which led to a renaissance of the language. The Germans met opposition with deportation, forced labour, forced conscription and, more drastically, with internment, deportation to Nazi concentration camps and execution.
Executions took place after the so-called general strike from September 1 to September 3, 1942, which paralysed the administration, agriculture, industry and education in response to the declaration of forced conscription by the German administration on August 30, 1942. The Germans suppressed the strike violently. They executed 21 strikers and deported hundreds more to Nazi concentration camps. The then civilian administrator of Luxembourg, Gauleiter Gustav Simon, had declared conscription necessary to support the German war effort. The general strike in Luxembourg remained one of the few mass strikes against the German war machine in Western Europe. U.S. forces liberated most of the country in September 1944. They entered the capital city on September 10, 1944. During the Ardennes Offensive (Battle of the Bulge) German troops took back most of northern Luxembourg for a few weeks. Allied forces finally expelled the Germans in January 1945. Between December 1944 and February 1945, the recently liberated city of Luxembourg was designated by the OB West (German Army Command in the West) as the target for V-3 siege guns, which were originally intended to bombard London. Two V-3 guns based at Lampaden fired a total of 183 rounds at Luxembourg. However, the V-3 was not very accurate. 142 rounds landed in Luxembourg, with 44 confirmed hits in the urban area, and the total casualties were 10 dead and 35 wounded. The bombardments ended with the American Army nearing Lampaden on February 22, 1945. Altogether, of a pre-war population of 293,000, 5,259 Luxembourgers lost their lives during the hostilities. After World War II, Luxembourg abandoned its politics of neutrality, when it became a founding member of the North Atlantic Treaty Organization (NATO) and the United Nations. It is a signatory of the Treaty of Rome, and constituted a monetary union with Belgium (Benelux Customs Union in 1948), and an economic union with Belgium and the Netherlands, the so-called BeNeLux. Between 1945 and 2005, the economic structure of Luxembourg changed significantly. The crisis of the metallurgy sector, which began in the mid-1970s and lasted till the late 1980s, nearly pushed the country into economic recession, given the monolithic dominance of that sector. The Tripartite Coordination Committee, consisting of members of the government, management representatives, and trade union leaders, succeeded in preventing major social unrest during those years, thus creating the myth of a “Luxembourg model” characterised by social peace. Although in the early years of the 21st century Luxembourg enjoyed one of the highest GNP per capita in the world, this was mainly due to the strength of its financial sector, which gained importance at the end of the 1960s. Thirty-five years later, one-third of the tax proceeds originated from that sector. The harmonisation of the tax system across Europe could, however, seriously undermine the financial situation of the grand duchy. Luxembourg has been one of the strongest advocates of the European Union in the tradition of Robert Schuman. It was one of the six founding members of the European Coal and Steel Community (ECSC) in 1952 and of the European Economic Community (EEC) (later the European Union) in 1957; in 1999 it joined the euro currency area. Encouraged by the contacts established with the Dutch and Belgian governments in exile, Luxembourg pursued a policy of presence in international organisations. 
In the context of the Cold War, Luxembourg clearly opted for the West, joining NATO in 1949. Engagement in European reconstruction was rarely questioned subsequently, either by politicians or by the greater population. Despite its small proportions, Luxembourg often played an intermediary role between larger countries. This role of mediator, especially between the two large and often bellicose nations of Germany and France, was considered one of the main characteristics of its national identity, allowing the Luxembourgers to avoid having to choose between these two neighbours. The country also hosted a large number of European institutions such as the European Court of Justice. Luxembourg's small size no longer seemed to be a challenge to the existence of the country, and the creation of the Banque Centrale du Luxembourg (1998) and of the University of Luxembourg (2003) was evidence of the continuing desire to become a "real" nation. The decision in 1985 to declare Lëtzebuergesch (Luxembourgish) the national language was also a step in the affirmation of the country's independence. In fact, the linguistic situation in Luxembourg was characterised by trilingualism: Lëtzebuergesch was the spoken vernacular language, German the written language in which Luxembourgers were most fluent, and French the language of official letters and law. In 1985, the country fell victim to a mysterious bombing spree, targeted mostly at electrical masts and other installations. In 1995, Luxembourg provided the president of the European Commission, former Prime Minister Jacques Santer, who later had to resign over corruption accusations against other commission members. Prime Minister Jean-Claude Juncker followed this European tradition. On 10 September 2004, Mr Juncker became the semi-permanent president of the group of finance ministers from the 12 countries that share the euro, a role that led him to be dubbed "Mr Euro". The present sovereign is Grand Duke Henri. Henri's father, Jean, succeeded his mother, Charlotte, on 12 November 1964. Jean's eldest son, Prince Henri, was appointed "Lieutenant Représentant" (Hereditary Grand Duke) on 4 March 1998. On 24 December 1999, Prime Minister Juncker announced Grand Duke Jean's decision to abdicate the throne on 7 October 2000 in favour of Prince Henri, who assumed the title and constitutional duties of Grand Duke. On July 10, 2005, after threats of resignation by Prime Minister Juncker, the proposed European Constitution was approved by 56.52% of voters.
https://en.wikipedia.org/wiki?curid=17830
Geography of Luxembourg Luxembourg is a small country located in the Low Countries, part of North-West Europe. It borders Belgium to the west and north, France to the south, and Germany to the east. Luxembourg is landlocked, separated from the North Sea by Belgium. The topography of the country is divided very clearly between the hilly Oesling of the northern third of the Grand Duchy and the flat Gutland, which occupies the southern two-thirds. The country's longest river is the Sauer, which is a tributary of the Moselle, the basin of which includes almost all of Luxembourg's area. Other major rivers include the Alzette in the south and the Wiltz in the north. The capital, and by far the largest city, is Luxembourg City, which is located in the Gutland, as are most of the country's main population centres, including Esch-sur-Alzette, Dudelange, and Differdange. Besides Luxembourg City, the other main towns are primarily located in the southern Red Lands region, which lines the border between Luxembourg and France to the south. Despite its small size, Luxembourg has a varied topography, with two main features to its landscape. The northern section of the country is formed by part of the plateau of the Ardennes, a region of low mountains. The rest of the country is made up of undulating countryside with broad valleys. The capital, Luxembourg City, is located in the southern part of the country. The most prominent feature of the landscape is the high plateau of the Ardennes in the north, which at its highest point reaches about 560 m. Commonly known as the Oesling, the Ardennes region covers about 32% of the entire country. Rugged scenery predominates because river erosion over thousands of years has left a varied, low mountain landscape, densely covered with vegetation, sometimes with considerable variations in height. These differences in relief, together with stretches of water interspersed with forests, fields, and pastures, are the main features that make the landscape so distinctive. Typical of this high area, however, is the infertile soil and poor drainage, resulting in numerous peat bogs, which were once exploited as fuel. These factors, combined with heavy rainfall and frost, made this an inhospitable environment for the first settlers. Even today, the living conditions in such an environment are not particularly favourable. Nevertheless, some 7,800 people make a living off the land through either forestry, small-scale farming, or environmental work. Because the soil is so difficult to cultivate, most of the land is used for cattle pasture. The Ardennes region also includes the Upper Sûre National Park, an important conservation area and a hiker's retreat. South of the Sûre River, the country is known as the Gutland. The region covers slightly over two-thirds of the country, and its terrain gently rises and falls. Agriculture is the main activity, as the term Gutland reflects the fertile soil and the warm, dry summers experienced in this part of the Duchy compared with the Oesling region. As a result, vegetables and fruit, such as strawberries, apples, plums, and cherries, are grown in large quantities. River erosion in this area has created deep gorges and caves, resulting in some spectacular scenery. In the extreme south of the country lies "the land of the red rocks" – a reference to the deposits of minerals found here. Rich in iron ore, the district has been a mining and heavy industrial region since Roman times, if not earlier.
The tall chimneys of the iron and steel works are typical landmarks of the industrial south. To the east lies the grape-growing valley of the Moselle. Numerous villages nestle in the deep valleys and behind the vineyards along the river banks. Most villages have at least one winery. Also in the east is the "Little Switzerland" area, characterized by wooded glens and ravines in unusual rock formations. Luxembourg has a number of minor rivers, such as the Eisch, the Alzette, and the Pétrusse, but the main river is the Moselle with its tributaries, the Sûre and the Our. Together, their courses serve as a natural boundary between Luxembourg and Germany. Along their banks, many of the country's medieval castles can be found. The Moselle actually rises in northeast France and flows north through Luxembourg to join the mighty Rhine at Koblenz, Germany. The Moselle is navigable thanks to canalization. Green slopes, covered with vines, flank the meandering course of the river. Rising in Belgium, the Sûre River flows in an easterly direction through Luxembourg and into the Moselle. Its sinuous course essentially cuts Luxembourg from east to west. The Our River, flowing along the northeastern border, is a tributary of the Sûre. Its valley is surrounded by unspoiled countryside. The Upper Sûre lake is the largest stretch of water in the Grand Duchy. Surrounded by luxuriant vegetation and peaceful creeks, the lake is a centre for water sports, such as sailing, canoeing, and kayaking. Such outdoor activities, which have made it an attractive spot for tourists, have led to the growth of a local crafts industry. The town of Esch-sur-Sûre nestles at one end of the lake. Immediately above it, the river has been dammed to form a hydroelectric reservoir extending up the valley. The Upper Sûre dam was built in the 1960s to meet the country's drinking water requirements. Elevation extremes: lowest point, the Moselle at Wasserbillig, 133 m; highest point, Kneiff in Troisvierges, 560 m. Luxembourg is part of the West European continental climatic region and enjoys a temperate climate without extremes. Winters are mild, summers fairly cool, and rainfall is high. Seasonal weather differs somewhat between the northern and southern regions. In the north there is considerable influence from Atlantic systems, in which the passage of frequent pressure depressions gives rise to unstable weather conditions. This results in overcast skies and considerable drizzle in the winter. Annual rainfall is high in some areas. In the summer, excessive heat is rare and temperatures drop noticeably at night. Low temperatures and humidity make for what those living in this part of the country call, optimistically, an "invigorating climate". In the south, although the rainfall is not significantly lower and the winters are no milder, the principal difference is in the higher summer temperatures, especially in the Moselle Valley. Crops, especially wine grapes, thrive here. The sunniest months are May to August. In the spring, the countryside is a riot of wildflowers and blossoms. Luxembourg's flora is characterized by the country's location at the border between the Atlantic-European and Central-European climate zones. In the north, beech and oak trees are plentiful. The oak trees supply large quantities of excellent hardwood timber because of their strength.
Along the riverbanks, species like the black alder and willows can be found. Alder wood is pale yellow to reddish brown, fine-textured and durable even under water. It is also an important timber tree, mainly because of its disease-resistant properties. Willow trees are valued for ornamental purposes. The narrow, deeply incised valleys of the north also provide a habitat for rare plants and animals, especially the European otter, a protected species. In the industrial south, among the abandoned quarries and deserted open pit mines, nature has reclaimed her own, and there are flowers everywhere. Party to: Air Pollution, Air Pollution-Nitrogen Oxides, Air Pollution-Sulphur 85, Air Pollution-Sulphur 94, Air Pollution-Volatile Organic Compounds, Biodiversity, Climate Change, Desertification, Endangered Species, Hazardous Wastes, Marine Dumping, Nuclear Test Ban, Ozone Layer Protection, Ship Pollution, Tropical Timber 83, Tropical Timber 94, Wetlands. Signed, but not ratified: Air Pollution-Persistent Organic Pollutants, Climate Change-Kyoto Protocol, Environmental Modification, Law of the Sea. Area: total 2,586 km2 (land 2,586 km2; water 0 km2). Natural resources: iron ore (no longer exploited), arable land. Land use: arable land 23.9%; permanent crops 0.56%; other 75.52% (2011). Irrigated land: 10 km2 (including Belgium) (1993 est.). Total renewable water resources: 3.1 km3.
https://en.wikipedia.org/wiki?curid=17831
Demographics of Luxembourg This article is about the demographic features of the population of Luxembourg, including population density, education level, health of the populace, economic status, religious affiliations and other aspects of the population. The following is an overview of the demographics of Luxembourg. Demographic topics include basic statistics, most populous cities, and religious affiliation. The population of Luxembourg as of 1 January 2020 was 626,108 (52.5% Luxembourgers and 47.4% of foreign nationality). The people of Luxembourg are called Luxembourgers. Population, birth rates, and death rates in Luxembourg since 1900 are chronicled by the "UN World Population Prospects". The foreign population resident in Luxembourg currently numbers over 296,465, corresponding to 47.4% of the total population (compared to 13.2% in 1961). That means there are currently almost as many immigrants as there are native citizens. These immigrants are overwhelmingly nationals of EU countries (accounting for over 80%), by far the greater part of whom originally come from Portugal, Italy and the two neighbouring countries, France and Belgium. For some years, there has also been a large increase in the number of immigrants and asylum seekers from the countries of Eastern Europe, and especially the new republics to have emerged from the former Yugoslavia (Bosnia and Herzegovina, Serbia and Montenegro). These immigrants include a considerable proportion of young people. Immigrants (especially asylum seekers) have a strong impact on the birth rate, accounting for nearly 50% of births in Luxembourg. A more detailed breakdown by nationality shows that the Portuguese community is still the largest group, accounting for almost a third of the foreign population. The Italian population has been stable for the past ten years at approximately 20,000. Some 80,000 foreigners come from the bordering countries (France, Belgium and Germany). As of 2020 the population of Russian nationals in Luxembourg is 1,857. The Luxembourg Russian Saturday school, Kalinka, serves students ages 3–12 and includes Russian language and cultural classes. In 2014, there were 160 students and 22 teachers in the school. The Japanese Supplementary School in Luxembourg (ルクセンブルグ補習授業校 "Rukusenburugu Hoshū Jugyō Kō") is a Japanese supplementary school operated in the country, serving students ages 6–15. It is held at the International School Luxembourg and as of 2014 has about 60 students. Its operations at the ISL began in 1991. The predominant religion of the Luxembourg population is Roman Catholic, with Protestant, Anglican, Jewish and Muslim minorities. According to a 1979 law, the government forbids the collection of data on religious practices, but over 90% of the population is estimated to be baptized Catholic (the Virgin Mary is the Patroness of the city of Luxembourg). The Lutherans are the largest Protestant denomination in the country. Muslims are estimated to number approximately 6,000 persons, notably including 1,500 refugees from Montenegro; Orthodox (Albanian, Greek, Serbian, Russian, and Romanian) adherents are estimated to number approximately 5,000 persons, along with approximately 1,000 Jews. Freedom of religion is provided for by the Luxembourg Constitution. The following demographic statistics are from the CIA World Factbook, unless otherwise indicated: net migration rate 24.64 migrant(s)/1,000 population (2007 est.); infant mortality 4.68 deaths/1,000 live births (2007 est.); total fertility rate 1.7 children born/woman (2002 est.).
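As a quick consistency check on the population figures quoted above, using only the numbers given in this article:

\[ \frac{296{,}465}{626{,}108} \approx 0.4735 \approx 47.4\%, \]

which matches the stated share of residents with foreign nationality.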
A plethora of languages are spoken in Luxembourg, but the three primary languages are Luxembourgish (national and administrative language), German (administrative language), and French (administrative language). Additionally, people born in foreign countries and temporary guest workers make up more than a third (40%) of the population of Luxembourg. Although most of these foreign-born people primarily speak German and French, a significant minority are also native speakers of Portuguese, Italian, and English.
https://en.wikipedia.org/wiki?curid=17832
Politics of Luxembourg The politics of Luxembourg takes place in the framework of a parliamentary representative democratic monarchy with a multi-party system, in which the Prime Minister of Luxembourg is the head of government. Executive power is exercised, under the constitution of 1868 as amended, by the Grand Duke and the Council of Government (cabinet), which consists of a prime minister and several other ministers. Usually, the prime minister is the leader of the political party or coalition of parties holding the most seats in parliament. Legislative power is vested in both the government and parliament. The judiciary is independent of the executive and the legislature. Legislative power is vested in the Chamber of Deputies, elected directly to five-year terms. Since the end of World War II, the Christian Social People's Party (CSV) has been the senior partner in all governing coalitions with two exceptions: 1974–79 (DP-LSAP coalition) and since 2013 (DP-LSAP-Green coalition). The Catholic-oriented CSV resembles Christian democratic political parties in other West European nations and enjoys broad popular support, making it the strongest party in the country and the second strongest in those regions where it is not the number one. The Luxembourg Socialist Workers' Party (LSAP) is a party of social-democratic orientation which has been a junior partner in most governments since 1974, either with the CSV in 1984–1999 and 2004–2013, or with the Democratic Party in 1974–1979. Its stronghold lies in the industrial belt in the south of the country (the "Minette Region", which is mainly the canton of Esch). The Democratic Party (DP) is a liberal party, drawing support from self-employed persons, the business community and the urban upper-middle class. Like other West European liberal parties, it advocates a mixture of basic social legislation and minimum government involvement in the economy. It is strongly pro-NATO and defends the idea of a secular state in which religion should not play any role in public life. The DP was a junior partner in coalition governments with the CSV in 1979–1984 and 1999–2004, and the senior partner in a coalition government with the LSAP in 1974–1979. The traditional stronghold of the party is the City of Luxembourg, with the "Buergermeeschter" (Mayor) of the nation's capital usually coming from the ranks of the DP. The Communist Party (KPL), which received 10–18% of the vote in national elections from World War II to the 1960s, won only two seats in the 1984 elections, one in 1989, and none in 1994. Its small remaining support lies in the heavily industrialized south. The Greens have received growing support since the party was officially formed in 1983. It opposes both nuclear weapons and nuclear power and supports environmental and ecological preservation measures. This party generally opposes Luxembourg's military policies, including its membership in the North Atlantic Treaty Organization. In the June 2004 parliamentary elections, the CSV won 24 seats, the LSAP 14, the DP 10, the Greens 7, and the Alternative Democratic Reform Party 5. The Left and the Communist Party each lost its single seat, in part due to their separate campaigns. The Democratic Party, which had become the junior coalition partner in 1999, registered heavy losses. The long-reigning CSV was the main winner, partly due to the personal popularity of the prime minister, Jean-Claude Juncker (CSV). In July 2004, it chose the LSAP as its coalition partner.
Jean Asselborn (LSAP) was appointed Vice Prime Minister and Minister of Foreign Affairs and Immigration. In 2013, the CSV lost one seat (23 seats instead of 24). A complete list of all governments is maintained on the website of the Government of Luxembourg. In 2008, the bitter controversy over euthanasia led parliament to pass a measure restricting the legislative veto powers of the Grand Duke, who had opposed the pro-euthanasia law on the grounds of his personal moral standards, based on the Christian faith. This problem of private conscience was very similar to what had occurred in Belgium in the early 1990s, when King Baudouin expressed his opposition to a law liberalizing abortion. Luxembourg has a parliamentary form of government with a constitutional monarchy operating according to absolute primogeniture. According to the constitution of 1868, executive power is exercised by the Grand Duke or Grand Duchess and the cabinet, which consists of a Prime Minister and a variable number of other ministers. The Grand Duke has the power to dissolve the legislature and have a new one elected. However, since 1919, sovereignty has resided with the nation. The monarchy is hereditary within the ruling dynasty of "Luxembourg-Nassau". The prime minister and vice prime minister are appointed by the monarch, following popular elections to the Chamber of Deputies. All government members are responsible to the Chamber of Deputies. The government is currently a coalition of the DP, LSAP, and Green Party. The Chamber of Deputies (Luxembourgish: "D'Chamber"; French: "Chambre des Députés") has 60 members, elected for a five-year term by proportional representation in four multi-seat constituencies. The Council of State (Luxembourgish: "Staatsrot"; French: "Conseil d'État") is an advisory body composed of 21 citizens (usually politicians or senior public servants with good political ties) proposed by the Council of Government and appointed by the Grand Duke. Traditionally, the heir to the throne is also one of its members. Its role is to advise the Chamber of Deputies in the drafting of legislation. A councillor's mandate ends after a continuous or discontinuous period of fifteen years, or when the person concerned reaches the age of seventy-two. The responsibilities of the members of the Council of State are carried out in addition to their normal professional duties. Luxembourgish law is based upon the Code Napoléon with numerous updates, modernizations, and modifications. The apex of the judicial system is the Superior Court of Justice (Luxembourgish: "Iewechte Geriichtshaff"; French: "Cour Supérieure de Justice"), whose judges are appointed by the Grand Duke for life. The same applies to the Administrative Court (Luxembourgish: "Verwaltungsgeriicht"; French: "Cour Administrative"). The Grand Duchy is divided into twelve cantons. Luxembourg's contribution to its defense and to NATO consists of a small but well-equipped army of volunteers of Luxembourgish and foreign nationality. Its operational headquarters are at the Haerebierg Military Center in Diekirch. Being a landlocked country, Luxembourg has no navy, and it also has no air force. Under an agreement with neighbouring Belgium, its airspace is protected by the Belgian Air Force. In addition, 18 NATO Airborne Warning And Control System (AWACS) airplanes are registered as aircraft of Luxembourg by decision of the NATO authorities.
Luxembourg is a member of ACCT, Australia Group, Benelux, CE, EAPC, EBRD, ECE, EIB, EMU, EU, FAO, IAEA, IBRD, ICAO, ICCt, ICC, ICRM, IDA, IEA, IFAD, IFC, IFRCS, ILO, IMF, IMO, Intelsat, Interpol, IOC, IOM, ISO, ITU, ITUC, NATO, NEA, NSG, OECD, OPCW, OSCE, PCA, UN, UNCTAD, UNESCO, UNIDO, UPU, WCO, WEU, WHO, WIPO, WMO, WTrO, Zangger Committee
https://en.wikipedia.org/wiki?curid=17833
Economy of Luxembourg The economy of Luxembourg is largely dependent on the banking, steel, and industrial sectors. Luxembourgers enjoy the highest per capita gross domestic product in the world (CIA 2018 est.). Luxembourg is seen as a diversified industrialized nation, in contrast to Qatar, where the oil boom is the major source of that southwest Asian state's wealth. Although in tourist literature Luxembourg is aptly called the "Green Heart of Europe", its pastoral land coexists with a highly industrialized and export-intensive economy. Luxembourg's economy is quite similar to Germany's. Luxembourg enjoys a degree of economic prosperity very rare among industrialized democracies. In 2009, a budget deficit of 5% resulted from government measures to stimulate the economy, especially the banking sector, in response to the world economic crisis. This was, however, reduced to 1.4% in 2010. For 2017 the expected figures were as follows: growth 4.6%; inflation 1.0%; budget deficit 1.7%, to be reduced to 0.8% in 2020; debt 20.4%, with no new debt to be taken on in the fiscal year. In 2013 the GDP was $60.54 billion, of which services, including the financial sector, produced 86%. The financial sector comprised 36% of GDP, industry comprised 13.3% and agriculture only 0.3% (a short worked conversion of these shares into dollar terms appears at the end of this article). Banking is the largest sector in the Luxembourg economy. In the 2019 Global Financial Centres Index, Luxembourg was ranked as having the 25th most competitive financial centre in the world, and the third most competitive in Europe after London and Zürich. The country has specialised in the cross-border fund administration business. As Luxembourg's domestic market is relatively small, the country's financial centre is predominantly international. At the end of March 2009, there were 152 banks in Luxembourg, with over 27,000 employees. Political stability, good communications, easy access to other European centres, skilled multilingual staff, a tradition of banking secrecy and cross-border financial expertise have all contributed to the growth of the financial sector. These factors have contributed to a Corruption Perceptions Index of 8.3 and a DAW Index ranking of 10 in 2012, the latter being the highest in Europe. Germany accounts for the largest single grouping of banks, with Scandinavian, Japanese, and major US banks also heavily represented. Total assets exceeded €929 billion at the end of 2008. More than 9,000 holding companies are established in Luxembourg. The European Investment Bank, the financial institution of the European Union, is also located there. Concern about Luxembourg's banking secrecy laws, and its reputation as a tax haven, led in April 2009 to it being added to a "grey list" of nations with questionable banking arrangements by the G20, a list from which it was removed later in 2009. This concern has led Luxembourg to modify its tax legislation to avoid conflict with the tax authorities of European Union members. For example, the classic tax-exempt 1929 holding company was outlawed on 31 December 2010, as it was deemed an illegal state aid by the European Commission. A key event in the economic history of Luxembourg was the 1876 introduction of English metallurgy. The refining process led to the development of the steel industry in Luxembourg and the founding of the Arbed company in 1911. The restructuring of the industry and increasing government ownership in Arbed (31%) began as early as 1974.
As a result of timely modernization of facilities, cutbacks in production and employment, government assumption of portions of Arbed's debt, and recent cyclical recovery of the international demand for steel, the company is again profitable. Its productivity is among the highest in the world. US markets account for about 6% of Arbed's output. The company specializes in the production of large architectural steel beams and specialized value-added products. There has been, however, a relative decline in the steel sector, offset by Luxembourg's emergence as a financial centre. In 2001, through the merger with Aceralia and Usinor, Arbed became Arcelor. Arcelor was taken over in 2006 by Mittal Steel to form ArcelorMittal, helmed by Lakshmi Mittal, the largest steel producer in the world. Government policies promote the development of Luxembourg as an audiovisual and communications centre. Radio-Television-Luxembourg is Europe's premier private radio and television broadcaster. The government-backed Luxembourg satellite company "Société européenne des satellites" (SES) was created in 1986 to install and operate a satellite telecommunications system for the transmission of television programmes throughout Europe. The first SES Astra satellite, the 16-channel RCA 4000 Astra 1A, was launched by an Ariane rocket in December 1988. SES is presently the world's largest satellite services company in terms of revenue. Tourism is an important component of the national economy, representing about 8.3% of GDP in 2009 and employing some 25,000 people, or 11.7% of the working population. Despite the current crisis, the Grand Duchy still welcomes over 900,000 visitors a year, who spend an average of 2.5 nights in hotels, hostels or on camping sites. Business travel is flourishing, representing 44% of overnight stays in the country and 60% in the capital, up 11% and 25% respectively between 2009 and 2010. Luxembourg's small but productive agricultural sector is highly subsidized, mainly by the EU and the government. It employs about 1–3% of the work force. Most farmers are engaged in dairy and meat production. Vineyards in the Moselle Valley annually produce about 15 million litres of dry white wine, most of which is consumed within Luxembourg, with smaller quantities also consumed in Germany, France, and Belgium. Accounting requirements depend on the size of the company, judged by reference to three criteria: the balance sheet total (total assets, excluding losses of the accounting year), the net turnover (as it appears on the profit and loss account), and the average size of the workforce. Medium-sized and large companies must be audited by one or more independent company auditors, appointed by the general meeting from among the members of the Institute of Independent Auditors of Companies. Small companies must be audited by an accountant appointed by the general meeting for a fixed term. The accountants' associations have had difficulty getting organized because of the importance of the State in the accounting system. Labour relations have been peaceful since the 1930s. Most industrial workers are organized in unions linked to one of the major political parties. Representatives of business, unions, and government participate in the conduct of major labour negotiations.
Foreign investors often cite Luxembourg's labour relations as a primary reason for locating in the Grand Duchy. Unemployment in 1999 averaged less than 2.8% of the workforce, but reached 4.4% by 2007. In 1978, Luxembourg tried to build a 1,200 MW nuclear reactor but dropped the plans after threats of major protests. Currently, Luxembourg uses imported oil and natural gas for the majority of its energy generation. Luxembourg is a member of the European Space Agency, to which it contributed 23 million euros in 2015. The world's biggest satellite operator (SES Global) has its origin and headquarters in Betzdorf, Luxembourg. In February 2016, the Government of Luxembourg announced that it would attempt to "jump-start an industrial sector to mine asteroid resources in space" by, among other things, creating a "legal framework" and regulatory incentives for companies involved in the industry. By June 2016, the government had announced that it would invest in research, technology demonstration, and the direct purchase of equity in companies relocating to Luxembourg. By April 2017, three space mining corporations had established headquarters in Luxembourg. Luxembourg's new law took effect in August 2017, ensuring that private operators can be confident about their rights to resources they extract in space. The law provides that space resources can be owned by anyone, not just by Luxembourg citizens or companies. Luxembourg has efficient road, rail and air transport facilities and services. The road network has been significantly modernised in recent years with 147 km of motorways connecting the capital to adjacent countries. The advent of the high-speed TGV link to Paris has led to renovation of the city's railway station while a new passenger terminal at Luxembourg Airport has recently been opened. There are plans to introduce trams (the first core line became operational at the end of 2017) in the capital and light-rail lines in adjacent areas within the next few years. The airport has seen sustained growth in passenger numbers in recent years (2.7 million in 2015, with 4 million expected by 2020), and the second stage of its expansion is under way. According to a note from the Luxembourg statistical agency, the Luxembourg economy was set to grow 4.0% in 2011. The economic situation was particularly dynamic in late 2010 and early 2011 but there were signs of a slowdown, both in the international economic environment and in terms of national indicators, and the economy was forecast to slip into recession in 2012.
https://en.wikipedia.org/wiki?curid=17834
Transport in Luxembourg Transport in Luxembourg is ensured principally by road, rail and air. There are also services along the River Moselle, which forms the border with Germany. The road network has been significantly modernised in recent years with motorways to adjacent countries. The advent of the high-speed TGV link to Paris has led to renovation of the city's railway station while a new passenger terminal at Luxembourg Airport has recently been opened. Trams in the capital were introduced in December 2017 and there are plans for light-rail lines in adjacent areas. All public transport in Luxembourg (buses, trams, and trains) has been free to use since 29 February 2020. Operated by Chemins de Fer Luxembourgeois (CFL), Luxembourg's railways form the backbone of the country's public transport network, linking the most important towns. The total length of operational (standard gauge) track is 274 km, though it was some 550 km at the end of the Second World War. There are regular services from Luxembourg City to Ettelbruck, Esch-sur-Alzette, Wasserbillig and Kleinbettingen, while international routes extend to Trier, Koblenz, Brussels, Liège, Metz and Nancy. The railway network links into Belgium, Germany and France. Some of the cross-border services are run by CFL, others by SNCF, NMBS/SNCB and DB. There is now a frequent high-speed connection to Paris via the LGV Est line. EuroCap-Rail is a proposed high-speed axis connecting Brussels, Luxembourg (city), and Strasbourg. The six Luxembourg motorways cover a total distance of 152 km, linking the capital with Trier (Germany), Thionville (France) and Arlon (Belgium) as well as with Esch-sur-Alzette and Ettelbruck in Luxembourg. Luxembourg's motorways are toll free. The speed limit is normally 130 km/h, reduced to 110 km/h in rainy weather. With 56.8 km of motorway per 1,000 km2, Luxembourg probably now has the highest density of motorways in Europe. Luxembourg City is a major business and financial centre. Many workers prefer to live in the three neighbouring countries and drive to work each day. This creates huge traffic jams during peak commuting hours. Tailbacks on the E411 motorway can extend five or more kilometres into Belgium and can take an hour or more to navigate. The remaining road network in Luxembourg accounts for a total length of 2,820 km, consisting of 798 km of trunk roads (RN or "routes nationales") and 2,022 km of secondary roads (CR or "chemins repris"). Comprehensive bus services linking the towns and villages of the Grand Duchy of Luxembourg are contracted out to private operators by the RGTR (Régime général des transports routiers) under the Ministry of Transport. Luxembourg City is served by 163 of its own AVL (Autobus de la Ville de Luxembourg) buses transporting some 28 million passengers per year (2007). As with the RGTR, AVL contracts out a number of services to private operators. Most of these buses are in AVL colours, but the owner's name is often mentioned on them in small print, and the lettering on the licence plate can also identify the owner. There are 25 regular bus routes plus special bus services through the night. The TICE or Syndicat des Tramways Intercommunaux dans le Canton d'Esch/Alzette operates several bus routes. They are centred on the city of Esch-sur-Alzette in the south-west of the country. Most are urban and suburban routes but some extend into the surrounding countryside. 
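The motorway density quoted above is simply the network length divided by the country's land area. Taking Luxembourg's area as roughly 2,586 km² (a figure supplied here for illustration, not stated in the text) and the 147 km network length quoted in the economy article above:

\frac{147\ \text{km}}{2586\ \text{km}^2} \times 1000 \approx 56.8\ \text{km per } 1000\ \text{km}^2

With the 152 km total given in this article, the same calculation yields roughly 59 km per 1,000 km².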
CFL, the Luxembourg railway company, operates some 17 bus routes, mainly serving towns and villages that are no longer served by rail. A number of smaller cities like Ettelbruck and Wiltz have started their own local services, some of which reach out to nearby villages. These services are not part of the RGTR and national tickets are not always honoured. All transport companies work together under the "Verkéiersverbond", which covers all bus, tram, and train routes. From 29 February 2020, all public transport was made free throughout the territory of the Grand Duchy of Luxembourg, funded through general taxation. However, first-class tickets can still be purchased for use on the trains: a ticket valid for 2 hours is €3, whilst a one-day ticket is €6. Luxembourg's historic tramway network closed in 1964 but the city reintroduced trams at the end of 2017. The phased approach will initially see trams running through the Kirchberg district to the Grand Duchess Charlotte Bridge, before the line is eventually extended to Luxembourg railway station and the Cloche d'Or business district in the south, and Luxembourg Airport in the north. Service started on a part of the route on 10 December 2017. On the same day a new funicular line opened between the Grand Duchess Charlotte Bridge, commonly called the Pont Rouge or Red Bridge, and a new station on a CFL rail line located in a valley below. The line is due to enter full operational service in 2021. The River Moselle forms a 42 km natural border between Luxembourg and Germany in the southeast of the country. In the summer months, the Princess Marie-Astrid and a few other tourist boats operate regular services along the river. Mertert near Grevenmacher on the Moselle is Luxembourg's only commercial port. With two quays covering a total length of 1.6 km, it offers facilities connecting river, road and rail transport. It is used principally for coal, steel, oil, agricultural goods and building materials. In 2016, the port handled 1.2 million tonnes of cargo. Luxembourg Airport at Findel, some 6 km to the north of the city, is Luxembourg's only commercial airport. Thanks to its long runway (4,000 m), even the largest types of aircraft are able to use its facilities. Luxair, Luxembourg's international airline, and Cargolux, a cargo-only airline, operate out of the airport. In 2008, the airport ranked as Europe's 5th largest and the world's 23rd by cargo tonnage. Luxair has regular passenger services to 20 European destinations and operates tourist flights to 17 more. Other airlines operating flights to and from Luxembourg include British Airways, KLM, Scandinavian Airlines, Swiss Global Air Lines, and TAP Portugal. A large new airport terminal building was opened in 2008 with more modern facilities, including an underground car park. The trunk natural gas pipelines in Luxembourg have a total length of 155 km (2007). Russia and Norway are the main suppliers, and the Luxembourg network is connected to Germany, France and Belgium. The CIA World Factbook lists Luxembourg's merchant marine at 45 ships (2008 est.): 6 bulk carriers, 3 cargo ships, 15 chemical tankers, 4 container ships, 1 liquefied gas carrier, 3 passenger ships, 3 petroleum tankers and 9 roll-on/roll-off ships. The total includes some foreign-owned ships registered in Luxembourg as a flag of convenience: 7 from Belgium, 1 from Denmark, 17 from France, 5 from Germany, 2 from the Netherlands, 8 from the United Kingdom and 4 from the United States. One Luxembourg-owned ship is registered abroad, in Ukraine.
https://en.wikipedia.org/wiki?curid=17836
Luxembourg Army The Luxembourg Army is the national military force of Luxembourg. The army has been a fully volunteer military since 1967. As of December 2018, it has 414 personnel. The army is under civilian control, with the Grand Duke as Commander-in-Chief. The Minister for Defence, currently François Bausch, oversees army operations. The professional head of the army is the Chief of Defence, who answers to the minister and holds the rank of general. Luxembourg has participated in the Eurocorps, has contributed troops to the UNPROFOR and IFOR missions in former Yugoslavia, and has participated with a small contingent in the current NATO SFOR mission in Bosnia and Herzegovina. Luxembourg troops have also deployed to Afghanistan, to support ISAF. The army has also participated in humanitarian relief missions such as setting up refugee camps for Kurds and providing emergency supplies to Albania. On 8 January 1817, William I, Grand Duke of Luxembourg, published a constitutional law governing the organization of a militia, the main provisions of which were to remain in force until the militia was abolished in 1881. The law fixed the militia's strength at 3,000 men. Until 1840, Luxembourg’s militiamen served in units of the Royal Netherlands Army. Enlisted men served for five years: the first year consisted of active service, but during each of the subsequent four years of service they were mobilised only three times per year. In 1839, William I became a party to the Treaty of London by which the Grand-Duchy lost its western, francophone territories to the Belgian province of Luxembourg. Due to the country's population having been halved, with the loss of 160,000 inhabitants, the militia lost half its strength. Under the terms of the treaty, Luxembourg and the newly formed Duchy of Limburg, both members of the German Confederation, were together required to provide a federal contingent consisting of a light infantry battalion garrisoned in Echternach, a cavalry squadron in Diekirch, and an artillery detachment in Ettelbruck. In 1846, the cavalry and artillery units were disbanded and the Luxembourg contingent was separated from that of Limburg. The Luxembourg contingent now consisted of two light infantry battalions, one in Echternach and the second in Diekirch; two reserve companies; and a depot company. In 1866, the Austro-Prussian war resulted in the dissolution of the German Confederation. Luxembourg was declared neutral in perpetuity by the 1867 Treaty of London, and in accordance, its fortress was demolished in the following years. In 1867, the Prussian garrison left the fortress, and the two battalions of Luxembourg light infantry entered the city of Luxembourg that September. A new military organization was established in 1867, consisting of two battalions, known as the "Corps des Chasseurs Luxembourgeois", having a total strength of 1,568 officers and men. In 1868, the contingent came to consist of one light infantry battalion of four companies, with a strength of 500 men. On 16 February 1881, the light infantry battalion was disbanded with the abolition of the militia-based system. On 16 February 1881, the "Corps des Gendarmes et Volontaires" (Corps of Gendarmes and Volunteers) was established. It was composed of two companies, a company of gendarmes and one of volunteers. In 1939, a corps of auxiliary volunteers was established and attached to the company of volunteers. 
Following the occupation of Luxembourg by Germany in May 1940, recruitment for the company of volunteers continued until 4 December 1940, when they were moved to Weimar, Germany, to be trained as German police. In 1944 during World War II, the Luxembourg Government, while exiled in London, made agreements for a group of seventy Luxembourg volunteers to be assigned to the Artillery Group of the 1st Belgian Infantry Brigade, commonly known as Brigade Piron, Jean-Baptiste Piron being the chief of this unit. This contingent was named the Luxembourg Battery. Initially, it was built up and trained by two Belgian officers. Later, from August 1944, these were joined by Luxembourg officers, who had received training in Britain. Several Luxembourg NCOs and half of the country's troops had fought in North Africa in the French Foreign Legion. The rest were people who had escaped from Luxembourg, and young men evading forcible conscription into the Wehrmacht by fleeing to Britain. The Luxembourg unit landed in Normandy on 6 August 1944—at approximately the same time as the Dutch Princess Irene Brigade and the French 2nd DB ("division blindée") commanded by General Leclerc—two months after the D-Day landings. The Luxembourg Battery was equipped with four Ordnance QF 25 pounder howitzers, which were named after the four daughters of Grand Duchess Charlotte: Princesses Elisabeth, Marie Adelaide, Marie Gabriele and Alix. In 1944, obligatory military service was introduced. In 1945, the Corps de la Garde Grand Ducale (Grand Ducal Guard Corps) garrisoned in the Saint-Esprit barracks in Luxembourg City and the 1st and 2nd infantry battalions were established, one in Walferdange and the other in Dudelange. The Luxembourg Army took charge of part of the French zone of occupation in Germany, the 2nd Battalion occupying part of the Bitburg district and a detachment from the 1st Battalion part of the Saarburg district. The 2nd Battalion remained in Bitburg until 1955. Luxembourg signed the Treaty of Brussels in March 1948, and the North Atlantic Treaty in 1949. Setting up an army after the war proved more difficult than predicted. To a certain extent, the authorities could rely on escaped German conscripts and Luxembourgers who had joined Allied armies; however, they had to find a way to train officers. Initially, British military advisers came to Luxembourg, where training was carried out by British officers and NCOs. But officer training, in the long term, would have to be done in military schools abroad. Belgium and France were both interested in helping and offered solutions. In the end, the government opted for a compromise solution, by sending some officer cadets to the École spéciale militaire de Saint-Cyr in France and others to the Royal Military Academy in Belgium. This eventually led to disunity within the Luxembourg officer corps due to differences in training and promotion. In 1951, the Grand Ducal Guard relocated to Walferdange and integrated with the Commandement des Troupes. The Guard had special units for reconnaissance, radiac reconnaissance, and anti-air warfare. From 1955, it was organised into a headquarters company, a garrison platoon, a reconnaissance company and two training companies. In 1959, the Commandement des Troupes was disbanded and the Grand Ducal Guard was integrated into the Commandement du Territoire (Territorial Command). The force was reduced to a single company, a corporals' training school, and a weapons platoon. 
In 1960, the Grand Ducal Guard was again reorganised into four platoons, temporarily grouped into intervention and reinforcement detachments. In 1964, the Grand Ducal Guard was organized into an HQ, three platoons, a reinforcement platoon, and the NCO school. On 28 February 1966, the Grand Ducal Guard was officially disbanded. In 1950, seventeen countries, including Luxembourg, decided to send armed forces to assist the Republic of Korea. The Luxembourg contingent was incorporated into the Belgian United Nations Command, also known as the Korean Volunteer Corps. The Belgo-Luxembourg battalion arrived in Korea in 1951, and was attached to the US 3rd Infantry Division. Two Luxembourger soldiers were killed and 17 were wounded in the war. The Belgo-Luxembourg battalion was disbanded in 1955. In 1954, the Groupement Tactique Régimentaire (Regimental Tactical Group, GTR) was established as Luxembourg's contribution to NATO. It consisted of three infantry battalions, an artillery battalion, and support, medical, transport, signals, engineering, heavy mortar, reconnaissance, and headquarters companies. The GTR was disbanded in 1959. In addition to the GTR, the Army also included the Territorial Command, composed of headquarters, military police, movement and transportation companies, a static guard battalion, and a mobile battalion. In 1961, the 1st Artillery Battalion was placed at NATO's disposal. The battalion was organised into three batteries, each with six field howitzers (British 25 pounder guns converted to 105 mm caliber) from the former GTR artillery battalion, an HQ battery, and a service battery. In 1963, the battalion was attached to the US 8th Infantry Division. When the Grand Ducal Guard was disbanded in 1966, its tasks were transferred to the 1st Artillery Battalion, which performed them until it too was disbanded in 1967. Compulsory military service was abolished in 1967 and the 1st Infantry Battalion was established, consisting of a headquarters and services unit, two motorized infantry companies, and a reconnaissance company with two reconnaissance (recce) platoons and an anti-tank platoon. From 1968 onwards, it formed a part of NATO's ACE Mobile Force (Land) (AMF(L)). In 1985, a reinforced company (consisting of an AMF Company with two recce platoons and an anti-tank platoon, a forward air-control team, a national support element for logistics, and a medical support element) replaced the battalion. In 2002 the AMF(L) was dissolved. Luxembourg has participated in the Eurocorps since 1994, has contributed troops to the UNPROFOR and IFOR missions in former Yugoslavia, and has participated with a small contingent in the current NATO SFOR mission in Bosnia and Herzegovina. The Luxembourg army is integrated into the Multinational Beluga Force under Belgian command. Luxembourg troops have also deployed to Afghanistan, to support ISAF. Luxembourg financially supported international peacekeeping missions during the 1991 Persian Gulf War, in Rwanda, and in Albania. The army has also participated in humanitarian relief missions such as setting up refugee camps for Kurds and providing emergency supplies to Albania. The army is under civilian control, with the Grand Duke as Commander-in-Chief. The Minister for Defence, François Bausch (in office since 5 December 2018), oversees army operations. The professional head of the army is the Chief of Defence, currently Alain Duschène, who answers to the minister. 
The Grand Duke and the Chief of Defence are the only generals, with colonels as Deputy Chief of Defence and head of the Military Training Centre. Until 1999, the army was integrated into the Force Publique (Public Force), which also included the Gendarmerie and the Police; the Gendarmerie was merged with the Grand Ducal Police, under a different minister, in 2000. The army has been an all-volunteer force since 1967. It has a strength of 414 professional soldiers with a total budget of approximately $320 million, or 0.55% of GDP in 2019. The Luxembourg Army is a battalion-sized formation with four separate "compagnies" (companies) under the control of the "Centre Militaire" (Military Centre), located in the Caserne Grand-Duc Jean barracks on Herrenberg hill near the town of Diekirch. Luxembourg has no navy, as the country is landlocked. It has no air force either, although it does have aircraft. Compagnie A, the first of the two rifle companies that form the Luxembourg contingent of the Eurocorps, is normally integrated into the Belgian contribution during operations. As such, it participates in Eurocorps' contribution to the NATO Response Force (entire company) and the EU Battlegroups (one platoon). The company consists of a command element and three reconnaissance platoons of four sections each, plus a command section. Each section is equipped with two armoured M1114 HMMWVs, each armed with a .50 caliber M2 Browning machine gun. The command section has a MAN X40 truck in addition to its pair of HMMWVs. Compagnie B, currently known as the Reconversion Service, is the educational unit of the Army, providing various educational courses for personnel to take in preparation for advancement. On 19 May 2011, Company B was redesignated as the "Service de Reconversion" (Reconversion Service) with the mission to prepare volunteer soldiers for the return to civilian life. The service includes the "École de l'Armée" (Army School), which a soldier may attend after at least eighteen months of service; the school is divided into two sections. Compagnie C, better known as the Compagnie Commandement et Instruction (Staff & Instruction Company), is the main military training unit of the Luxembourg Army. This company is also responsible for the army's Elite Sports Section, reserved for sportsmen in the Army. Following their basic training, these soldiers join the Section de Sports d'Elite de l'Armée (SSEA). Compagnie D is the second rifle company – it provided Luxembourg's contribution to NATO's ACE Mobile Force (Land) (disbanded in 2002) as the Luxembourg Reconnaissance Company. Luxembourg's participation in various UN, EU, and NATO missions is drawn from Compagnie D, which mirrors Compagnie A in organisation, with a command element and three reconnaissance platoons. Luxembourg has two reconnaissance platoons, declared to NATO's Allied Mobile Force. They are typically referred to as Elite Forces, rather than Special Forces. The Grand Ducal Police has the Unité Spéciale de la Police. Luxembourg military uniforms are similar to British uniforms, consisting of dress, service or garrison, and field uniforms, often worn with a black beret. Dress uniforms are worn mostly on formal occasions, while service uniforms are worn for daily duty. Luxembourg Army uniforms consist of service and field attire for summer and winter, as well as a dress uniform and mess jacket for officers. 
The winter service dress uniform, of olive drab wool, consists of a single-breasted coat having patch pockets with flaps, a khaki shirt and tie, and trousers that are usually cuffless. The summer uniform is similar, but made of light tan material. Combat uniforms in olive-green, khaki-drab, or woodland camouflage pattern, worn with the United States PASGT helmet, are virtually indistinguishable from those worn until the 1990s by the United States Army. In 2010 Luxembourg adopted its own camouflage pattern. Those who have completed high school enter a special thirteen-week basic training in the Army as warrant officers, then attend a military officer school for five years (normally in Brussels, Belgium), before becoming lieutenants in the Luxembourg Army. Aspiring officers are sent to the Belgian École Royale Militaire in Brussels, or the École Spéciale Militaire de Saint-Cyr in France. After the first two years at these schools, officer-cadets receive the title of lieutenant. After leaving military academy, officer candidates become probationary officers for a period of twenty-four months. The probation period consists of specialised military-branch training at a school abroad, and practical service within one of the Army's units. If they succeed during this probation, their appointment as lieutenants is made permanent. Those who have completed five years of high school and have served four months as voluntary soldiers complete a nine-month course at the Infantry Training Department of the Belgian Army in Arlon, before becoming sergeants in the Luxembourg Army. Those who have not completed five years of high school may, after three years of service, become career corporals in the Luxembourg Army, if they pass physical and mental tests. They also have to pass part of the NCO School in Belgium. Luxembourg has only a small air component: all NATO AWACS planes are registered as aircraft of Luxembourg and carry the Luxembourg Army roundel.
https://en.wikipedia.org/wiki?curid=17837
London Underground The London Underground (also known simply as the Underground, or by its nickname the Tube) is a rapid transit system serving Greater London and some parts of the adjacent counties of Buckinghamshire, Essex and Hertfordshire in the United Kingdom. The Underground has its origins in the Metropolitan Railway, the world's first underground passenger railway. Opened in January 1863, it is now part of the Circle, Hammersmith & City and Metropolitan lines; the first line to operate underground electric traction trains, the City & South London Railway in 1890, is now part of the Northern line. The network has expanded to 11 lines, and in 2017/18 carried 1.357 billion passengers, making it the world's 12th busiest metro system. The 11 lines collectively handle up to 5 million passengers a day. The system's first tunnels were built just below the ground, using the cut-and-cover method; later, smaller, roughly circular tunnels (which gave rise to its nickname, the Tube) were dug at a deeper level. The system has 270 stations and of track. Despite its name, only 45% of the system is underground in tunnels, with much of the network in the outer environs of London being on the surface. In addition, the Underground does not cover most southern parts of the London region, and there are only 29 stations south of the River Thames. The early tube lines, originally owned by several private companies, were brought together under the "Underground" brand in the early 20th century and eventually merged along with the sub-surface lines and bus services in 1933 to form "London Transport" under the control of the London Passenger Transport Board (LPTB). The current operator, London Underground Limited (LUL), is a wholly owned subsidiary of Transport for London (TfL), the statutory corporation responsible for the transport network in the London region. 92% of operational expenditure is covered by passenger fares. The Travelcard ticket was introduced in 1983 and Oyster, a contactless ticketing system, in 2003. Contactless bank card payments were introduced in 2014, the first public transport system in the world to do so. The LPTB commissioned many new station buildings, posters and public artworks in a modernist style. The schematic Tube map, designed by Harry Beck in 1931, was voted a national design icon in 2006 and now includes other TfL transport systems such as the Docklands Light Railway, London Overground, TfL Rail, and Tramlink. Other famous London Underground branding includes the roundel and Johnston typeface, created by Edward Johnston in 1916. The idea of an underground railway linking the City of London with the urban centre was proposed in the 1830s, and the Metropolitan Railway was granted permission to build such a line in 1854. To prepare for construction, a short test tunnel was built in 1855 in Kibblesworth, a small town with geological properties similar to London. This test tunnel was used for two years in the development of the first underground train, and was later, in 1861, filled in. The world's first underground railway, it opened in January 1863 between Paddington and Farringdon using gas-lit wooden carriages hauled by steam locomotives. It was hailed as a success, carrying 38,000 passengers on the opening day, and borrowing trains from other railways to supplement the service. 
The Metropolitan District Railway (commonly known as the District Railway) opened in December 1868 from South Kensington to Westminster as part of a plan for an underground "inner circle" connecting London's main-line stations. The Metropolitan and District railways completed the Circle line in 1884, built using the cut and cover method. Both railways expanded, the District building five branches to the west reaching Ealing, Hounslow, Uxbridge, Richmond and Wimbledon, and the Metropolitan eventually extended into Buckinghamshire, far from Baker Street and the centre of London. For the first deep-level tube line, the City and South London Railway, two circular tunnels were dug between King William Street (close to today's Monument station) and Stockwell, under the roads to avoid the need for agreement with owners of property on the surface. This opened in 1890 with electric locomotives that hauled carriages with small opaque windows, nicknamed "padded cells". The Waterloo and City Railway opened in 1898, followed by the Central London Railway in 1900, known as the "twopenny tube". These two ran electric trains in circular tunnels of smaller diameter, whereas the Great Northern and City Railway, which opened in 1904, was built to take main line trains from Finsbury Park to a Moorgate terminus in the City and had larger-diameter tunnels. While steam locomotives were in use on the Underground there were contrasting health reports. There were many instances of passengers collapsing whilst travelling, due to heat and pollution, leading to calls to clean the air through the installation of garden plants. The Metropolitan even encouraged beards for staff to act as an air filter. Other reports claimed beneficial outcomes of using the Underground, including the designation of Great Portland Street as a "sanatorium for [sufferers of ...] asthma and bronchial complaints", and claims that tonsillitis could be cured with acid gas and that the Twopenny Tube cured anorexia. With the advent of electric Tube services (the Waterloo and City Railway and the Great Northern and City Railway), the Volks Electric Railway in Brighton, and competition from electric trams, the pioneering Underground companies needed modernising. In the early 20th century, the District and Metropolitan railways needed to electrify and a joint committee recommended an AC system, the two companies co-operating because of the shared ownership of the inner circle. The District, needing to raise the finance necessary, found an investor in the American Charles Yerkes, who favoured a DC system similar to that in use on the City & South London and Central London railways. The Metropolitan Railway protested about the change of plan, but after arbitration by the Board of Trade, the DC system was adopted. Yerkes soon had control of the District Railway and established the Underground Electric Railways Company of London (UERL) in 1902 to finance and operate three tube lines, the Baker Street and Waterloo Railway (Bakerloo), the Charing Cross, Euston and Hampstead Railway (Hampstead) and the Great Northern, Piccadilly and Brompton Railway (Piccadilly), which all opened between 1906 and 1907. When the "Bakerloo" was so named in July 1906, "The Railway Magazine" called it an undignified "gutter title". By 1907 the District and Metropolitan Railways had electrified the underground sections of their lines. 
In January 1913, the UERL acquired the Central London Railway and the City & South London Railway, as well as many of London's bus and tram operators. Only the Metropolitan Railway, along with its subsidiaries the Great Northern & City Railway and the East London Railway, and the Waterloo & City Railway, by then owned by the main line London and South Western Railway, remained outside the Underground Group's control. A joint marketing agreement between most of the companies in the early years of the 20th century included maps, joint publicity, through ticketing and UNDERGROUND signs, incorporating the first bullseye symbol, outside stations in Central London. At the time, the term Underground was selected in preference to other proposed names; 'Tube' and 'Electric' were both officially rejected. Ironically, the term Tube was later adopted alongside the Underground. The Bakerloo line was extended north to Queen's Park to join a new electric line from Euston to Watford, but World War I delayed construction and trains reached in 1917. During air raids in 1915 people used the tube stations as shelters. An extension of the Central line west to Ealing was also delayed by the war and was completed in 1920. After the war, government-backed financial guarantees were used to expand the network, and the tunnels of the City and South London and Hampstead railways were linked at Euston and Kennington; the combined service was not named the Northern line until later. The Metropolitan promoted housing estates near the railway with the "Metro-land" brand and nine housing estates were built near stations on the line. Electrification was extended north from Harrow to Rickmansworth, and branches opened from Rickmansworth to Watford in 1925 and from Wembley Park to Stanmore in 1932. The Piccadilly line was extended north to Cockfosters and took over District line branches to Harrow (later Uxbridge) and Hounslow. In 1933, most of London's underground railways, tramway and bus services were merged to form the London Passenger Transport Board, which used the London Transport brand. The Waterloo & City Railway, which was by then in the ownership of the main line Southern Railway, remained with its existing owners. In the same year that the London Passenger Transport Board was formed, Harry Beck's diagrammatic tube map first appeared. In the following years, the outlying lines of the former Metropolitan Railway closed, the Brill Tramway in 1935, and the line from Quainton Road to Verney Junction in 1936. The 1935–40 New Works Programme included the extension of the Central and Northern lines and the Bakerloo line to take over the Metropolitan's Stanmore branch. World War II suspended these plans after the Bakerloo line had reached Stanmore and the Northern line High Barnet and Mill Hill East in 1941. Following bombing in 1940, passenger services over the West London line were suspended, leaving Olympia exhibition centre without a railway service until a District line shuttle from Earl's Court began after the war. After work restarted on the Central line extensions in east and west London, these were completed in 1949. During the war many tube stations were used as air-raid shelters. On 3 March 1943, a test of the air-raid warning sirens, together with the firing of a new type of anti-aircraft rocket, resulted in a crush of people attempting to take shelter in Bethnal Green Underground station. 
A total of 173 people, including 62 children, died, making this both the worst civilian disaster of World War II in the United Kingdom and the largest loss of life in a single incident on the London Underground network. On 1 January 1948, under the provisions of the Transport Act 1947, the London Passenger Transport Board was nationalised and renamed the London Transport Executive, becoming a subsidiary transport organisation of the British Transport Commission, which was formed on the same day. Under the same act, the country's main line railways were also nationalised, and their reconstruction was given priority over the maintenance of the Underground; most of the unfinished plans of the pre-war New Works Programme were shelved or postponed. The District line needed new trains, and an unpainted aluminium train entered service in 1953; this became the standard for new trains. In the early 1960s the Metropolitan line was electrified as far as Amersham, with British Railways providing services for the former Metropolitan line stations between Amersham and Aylesbury. In 1962, the British Transport Commission was abolished, and the London Transport Executive was renamed the London Transport Board, reporting directly to the Minister of Transport. Also during the 1960s, the Victoria line was dug under central London and, unlike the earlier tunnels, did not follow the roads above. The line opened in stages between 1968 and 1971; the trains were driven automatically, and magnetically encoded tickets, collected by automatic gates, gave access to the platforms. On 1 January 1970 responsibility for public transport within Greater London passed from central government to local government, in the form of the Greater London Council (GLC), and the London Transport Board was abolished. The London Transport brand continued to be used by the GLC. On 28 February 1975, a southbound train on the Northern City Line failed to stop at its Moorgate terminus and crashed into the wall at the end of the tunnel, in the Moorgate tube crash. There were 43 deaths and 74 injuries, the greatest loss of life during peacetime on the London Underground. In 1976 the Northern City Line was taken over by British Rail and linked up with the main line railway at Finsbury Park, a transfer that had already been planned prior to the accident. In 1979 another new tube, the Jubilee line, named in honour of Queen Elizabeth II's Silver Jubilee, took over the Stanmore branch from the Bakerloo line, linking it to a newly constructed tube between Baker Street and Charing Cross stations. Under the control of the GLC, London Transport introduced a system of fare zones for buses and underground trains that cut the average fare in 1981. Fares increased following a legal challenge but the fare zones were retained, and in the mid-1980s the Travelcard and the Capitalcard were introduced. In 1984 control of London Buses and the London Underground passed back to central government with the creation of London Regional Transport (LRT), which reported directly to the Secretary of State for Transport, still retaining the London Transport brand. One-person operation had been planned in 1968, but conflict with the trade unions delayed its introduction until the 1980s. On 18 November 1987, fire broke out in an escalator at King's Cross St. Pancras tube station. The resulting fire cost the lives of 31 people and injured a further 100. 
London Underground was strongly criticised in the aftermath for its attitude to fires underground, and publication of the report into the fire led to the resignation of senior management of both London Underground and London Regional Transport. To comply with new safety regulations issued as a result of the fire, and to combat graffiti, a train refurbishment project was launched in July 1991. In April 1994, the Waterloo & City Railway, by then owned by British Rail and known as the Waterloo & City line, was transferred to the London Underground. In 1999, the Jubilee line was extended from Green Park station through Docklands to Stratford station, resulting in the closure of the short section of tunnel between Green Park and Charing Cross stations, and including the first stations on the London Underground to have platform edge doors. Transport for London (TfL) was created in 2000 as the integrated body responsible for London's transport system. TfL is part of the Greater London Authority and is constituted as a statutory corporation regulated under local government finance rules. The TfL Board is appointed by the Mayor of London, who also sets the structure and level of public transport fares in London. The day-to-day running of the corporation is left to the Commissioner of Transport for London. TfL eventually replaced London Regional Transport, and discontinued the use of the London Transport brand in favour of its own brand. The transfer of responsibility was staged, with transfer of control of London Underground delayed until July 2003, when London Underground Limited became an indirect subsidiary of TfL. Between 2000 and 2003, London Underground was reorganised in a Public-Private Partnership under which private infrastructure companies (infracos) upgraded and maintained the railway. This was undertaken before control passed to TfL, who were opposed to the arrangement. One infraco, Metronet, went into administration in 2007 and TfL took over its responsibilities; TfL also took over the other, Tube Lines, in 2010. Electronic ticketing in the form of the contactless Oyster card was introduced in 2003. London Underground services on the East London line ceased in 2007 so that it could be extended and converted to London Overground operation, and in December 2009 the Circle line changed from serving a closed loop around the centre of London to a spiral also serving Hammersmith. Since September 2014, passengers have been able to use contactless bank cards on the Tube. Their use has grown very quickly and now over a million contactless transactions are made on the Underground every day. As of 2017, the Underground serves 270 stations. Sixteen Underground stations are outside Greater London, eight on the Metropolitan line and eight on the Central line. Of these, five (Amersham, Chalfont & Latimer, Chesham, and Chorleywood on the Metropolitan line, and Epping on the Central line) are beyond the M25 London Orbital motorway. Of the 32 London boroughs, six (Bexley, Bromley, Croydon, Kingston, Lewisham and Sutton) are not served by the Underground network, while Hackney has Old Street (on the Northern line Bank branch) and Manor House (on the Piccadilly line) only just inside its boundaries. Lewisham used to be served by the East London line (stations at New Cross and New Cross Gate); the line and the stations were transferred to the London Overground network in 2010. London Underground's eleven lines total in length, making it the fifth longest metro system in the world. 
These are made up of the sub-surface network and the deep-tube lines. The Circle, District, Hammersmith & City, and Metropolitan lines form the sub-surface network, with railway tunnels just below the surface and of a similar size to those on British main lines, converging on a circular bi-directional loop around zone 1. The Hammersmith & City and Circle lines share stations and most of their track with each other, as well as with the Metropolitan and District lines. The Bakerloo, Central, Jubilee, Northern, Piccadilly, Victoria and Waterloo & City lines are deep-level tubes, with smaller trains that run in two circular tunnels ("tubes") with a diameter about . These lines have the exclusive use of a pair of tracks, except for the Uxbridge branch of the Piccadilly line, which shares track with the District line between Acton Town and Hanger Lane Junction and with the Metropolitan line between Rayners Lane and Uxbridge; and the Bakerloo line, which shares track with London Overground's Watford DC Line for its aboveground section north of Queen's Park. Fifty-five per cent of the system runs on the surface. There are of cut-and-cover tunnel and of tube tunnel. Many of the central London underground stations on deep-level tube routes are higher than the running lines to assist deceleration when arriving and acceleration when departing. Trains generally run on the left-hand track. In some places, the tunnels are above each other (for example, the Central line east of St Paul's station), or the running tunnels are on the right (for example on the Victoria line between Warren Street and King's Cross St. Pancras, to allow cross-platform interchange with the Northern line at Euston). The lines are electrified with a four-rail DC system: a conductor rail between the rails is energised at −210 V and a rail outside the running rails at +420 V, giving a potential difference of 630 V. On the sections of line shared with mainline trains, such as the District line from East Putney to Wimbledon and Gunnersbury to Richmond, and the Bakerloo line north of Queen's Park, the centre rail is bonded to the running rails. The average speed on the Underground is . Outside the tunnels of central London, many lines' trains tend to travel at over in the suburban and countryside areas. The Metropolitan line can reach speeds of . The London Underground was used by 1.357 billion passengers in 2017/2018. The Underground uses several railways and alignments that were built by main-line railway companies. Some tracks now in LU ownership remain in use by main line services. London Underground trains come in two sizes, larger sub-surface trains and smaller deep-tube trains. Since the early 1960s all passenger trains have been electric multiple units with sliding doors and a train last ran with a guard in 2000. All lines use fixed length trains with between six and eight cars, except for the Waterloo & City line that uses four cars. New trains are designed for maximum number of standing passengers and for speed of access to the cars and have regenerative braking and public address systems. Since 1999 all new stock has had to comply with accessibility regulations that require such things as access and room for wheelchairs, and the size and location of door controls. All underground trains are required to comply with The Rail Vehicle Accessibility (Non Interoperable Rail System) Regulations 2010 (RVAR 2010) by 2020. 
Stock on sub-surface lines is identified by a letter (such as S Stock, used on the Metropolitan line), while tube stock is identified by the year of intended introduction (for example, 1996 Stock, used on the Jubilee line). The Underground is served by a number of depots around the network. In the years since the first parts of the London Underground opened, many stations and routes have been closed. Some stations were closed because of low passenger numbers rendering them uneconomical; some became redundant after lines were re-routed or replacements were constructed; and others are no longer served by the Underground but remain open to National Rail main line services. In some cases, such as Aldwych, the buildings remain and are used for other purposes. In others, such as British Museum, all evidence of the station has been lost through demolition. When the Bakerloo line opened in 1906 it was advertised with a maximum temperature of , but over time the tube tunnels have warmed up. In 1938 approval was given for a ventilation improvement programme, and a refrigeration unit was installed in a lift shaft at Tottenham Court Road. Temperatures of were reported in the 2006 European heat wave. It was claimed in 2002 that, if animals were being transported, temperatures on the Tube would break European Commission animal welfare laws. A 2000 study reported that air quality was seventy-three times worse than at street level, with a passenger breathing the same mass of particulates during a twenty-minute journey on the Northern line as when smoking a cigarette. The main purpose of the London Underground's ventilation fans is to extract hot air from the tunnels, and fans across the network are being refurbished, although complaints of noise from local residents preclude their use at full power at night. In June 2006 a groundwater cooling system was installed at Victoria station. In 2012, air-cooling units were installed on platforms at Green Park station using cool deep groundwater and at Oxford Circus using chiller units at the top of an adjacent building. New air-conditioned trains are being introduced on the sub-surface lines, but space is limited on tube trains for air-conditioning units and these would heat the tunnels even more. The Deep Tube Programme, investigating replacing the trains for the Bakerloo, Central, Waterloo and City and Piccadilly lines, is looking for trains with better energy conservation and regenerative braking, on which it might be possible to install a form of air conditioning. In the original Tube design, trains passing through close-fitting tunnels act as pistons to create air pressure gradients between stations. This pressure difference drives ventilation between the platforms and the surface exits through the passenger foot network. This system depends on an adequate cross-sectional area of the airspace above the passengers' heads in the foot tunnels and escalators, where, by the Hagen–Poiseuille equation, laminar airflow is proportional to the fourth power of the radius. It also depends on an absence of turbulence in the tunnel headspace. In many stations the ventilation system is now ineffective because of alterations that reduce tunnel diameters and increase turbulence. An example is Green Park tube station, where false ceiling panels attached to metal frames have been installed that reduce the above-head airspace diameter by more than half in many parts. This has the effect of reducing laminar airflow by 94%. 
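The 94% figure follows directly from the fourth-power scaling quoted above: if laminar flow Q varies as the fourth power of the effective radius r, then halving the radius leaves only about 6% of the original flow.

Q \propto r^{4}, \qquad \frac{Q_{\text{new}}}{Q_{\text{old}}} = \left(\frac{r_{\text{new}}}{r_{\text{old}}}\right)^{4} = \left(\tfrac{1}{2}\right)^{4} = 0.0625 \approx 6\%

which corresponds to a reduction of roughly 94%, matching the Green Park example.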
Originally air turbulence was kept to a minimum by keeping all signage flat to the tunnel walls. Now the ventilation space above head height is crowded with ducting, conduits, cameras, speakers and equipment acting as baffle plates, with predictable reductions in flow. Often electronic signs have their flat surface at right angles to the main air flow, causing choked flow. Temporary sign boards that stand at the top of escalators also maximise turbulence. The alterations to the ventilation system are important, not only to heat exchange, but also to the quality of the air at platform level, particularly given its asbestos content. Originally access to the deep-tube platforms was by lift. Each lift was staffed, and at some quiet stations in the 1920s the ticket office was moved into the lift, or it was arranged that the lift could be controlled from the ticket office. The first escalator on the London Underground was installed in 1911 between the District and Piccadilly platforms at Earl's Court, and from the following year new deep-level stations were provided with escalators instead of lifts. The escalators had a diagonal shunt at the top landing. In 1921 a recorded voice instructed passengers to stand on the right, and signs followed in World War II. Travellers were asked to stand on the right so that anyone wishing to overtake them would have a clear passage on the left side of the escalator. The first 'comb' type escalator was installed in 1924 at Clapham Common. In the 1920s and 1930s many lifts were replaced by escalators. After the fatal 1987 King's Cross fire, all wooden escalators were replaced with metal ones and the mechanisms are regularly degreased to lower the potential for fires. The only wooden escalator not to be replaced was at Greenford station, which remained in use until March 2014; in October 2015 TfL replaced it with the first incline lift on the UK transport network. There are 426 escalators on the London Underground system and the longest, at , is at Angel. The shortest, at Stratford, gives a vertical rise of . There are 184 lifts, and numbers have increased in recent years because of investment making tube stations accessible. More than 28 stations will have lifts installed over the next 10 years, bringing the total of step-free stations to over 100. In mid-2012 London Underground, in partnership with Virgin Media, tried out Wi-Fi hot spots, in many stations but not in the tunnels, that allowed passengers free internet access. The free trial proved successful and was extended to the end of 2012, whereupon it switched to a service free for subscribers to Virgin Media and certain other providers, or available as a paid-for service. It is not currently possible to use mobile phones underground using native 2G, 3G or 4G networks, and a project to extend coverage before the 2012 Olympics was abandoned because of commercial and technical difficulties. UK subscribers to the Three mobile network can use the InTouch app to route their voice calls and text messages via the Virgin Media Wi-Fi network at 138 London Transport stations. The EE network has also recently released a Wi-Fi calling feature available on the iPhone. The Northern line is being extended from Kennington to Battersea Power Station via Nine Elms, serving the Battersea Power Station and Nine Elms development areas. In April 2013, Transport for London applied for the legal powers of a Transport and Works Act Order to proceed with the extension. Preparation works started in early 2015. 
The main tunnelling was completed in November 2017, having started in April. The extension is due to open in 2021. Provision will be made for a possible future extension by notifying the London Borough of Wandsworth of a reserved course under Battersea Park and subsequent streets. The Croxley Rail Link involves re-routing the Metropolitan line's Watford branch from the current terminus at Watford over part of the disused Croxley Green branch line, with stations at Cassiobridge and Watford Vicarage Road and a terminus that is currently served only by London Overground. Funding was agreed in December 2011, and the final approval for the extension was given on 24 July 2013, with the aim of completion by 2020. In 2015, TfL took over responsibility for designing and building the extension from Hertfordshire County Council, and after further detailed design work concluded that an additional £50m would be needed. As of November 2017, the project is on hold awaiting additional funding. In 1931, the extension of the Bakerloo line from Elephant & Castle to Camberwell was approved, with a station at Albany Road and an interchange station. With post-war austerity, the plan was abandoned. In 2006, Ken Livingstone, the then Mayor of London, announced that within twenty years Camberwell would have a tube station. Plans for an extension from Elephant & Castle to Lewisham via the Old Kent Road are currently being developed by Transport for London, with possible completion by 2029. In 2007, as part of the planning for the transfer of the North London line to what became London Overground, TfL proposed re-extending the Bakerloo line. In 2011, the London Borough of Hillingdon proposed that the Central line be extended from West Ruislip to Uxbridge via Ickenham, claiming this would cut traffic on the A40 in the area. According to the "New Civil Engineer", the Canary Wharf Group has suggested the construction of a new rail line between Euston and Canary Wharf. The proposal is being considered by the government. In mid-2014 Transport for London issued a tender for up to 18 trains for the Jubilee line and up to 50 trains for the Northern line. These would be used to increase frequencies and cover the Battersea extension on the Northern line. In early 2014 the Bakerloo, Central, Piccadilly and Waterloo & City line rolling-stock replacement project was renamed "New Tube for London" (NTfL) and moved from the feasibility stage to the design and specification stage. The study showed that capacity could be increased substantially with new generation trains and re-signalling. The project is estimated to cost £16.42 billion (£9.86 bn at 2013 prices). A notice was published on 28 February 2014 in the Official Journal of the European Union asking for expressions of interest in building the trains. On 9 October 2014 TfL published a shortlist of those (Alstom, Siemens, Hitachi, CAF and Bombardier) who had expressed an interest in supplying 250 trains for between £1.0 billion and £2.5 billion, and on the same day opened an exhibition with a design by PriestmanGoode. The fully automated trains may be able to run without drivers, but the ASLEF and RMT trade unions that represent the drivers strongly oppose this, saying it would affect safety. The invitation to tender for the trains was issued in January 2016; the specifications for the Piccadilly line infrastructure were expected in 2016, and the first train is due to run on the Piccadilly line in 2023. Siemens Mobility's Inspiro design was selected in June 2018 in a £1.5 billion contract. 
The Underground received £2.669 billion in fares in 2016/17 and uses Transport for London's zonal fare system to calculate fares. There are nine zones, zone 1 being the central zone, which includes the loop of the Circle line with a few stations to the south of the River Thames. The only London Underground stations in Zones 7 to 9 are on the Metropolitan line beyond Moor Park, outside Greater London. Some stations are in two zones, and the cheapest fare applies. Paper tickets, contactless Oyster cards, contactless debit or credit cards, and Apple Pay and Android Pay smartphones and watches can be used for travel. Single and return tickets are available in either format, but Travelcards (season tickets) for longer than a day are available only on Oyster cards. TfL introduced the Oyster card in 2003; this is a pre-payment smartcard with an embedded contactless RFID chip. It can be loaded with Travelcards and used on the Underground, the Overground, buses, trams, the Docklands Light Railway, and National Rail services within London. Fares for single journeys are cheaper than paper tickets, and a daily cap limits the total cost in a day to the price of a Day Travelcard (a sketch of this capping logic follows this paragraph). The Oyster card must be 'touched in' at the start and end of a journey, otherwise it is regarded as 'incomplete' and the maximum fare is charged. In March 2012 it was reported that such maximum fares had cost travellers £66.5 million over the previous year. In 2014, TfL became the first public transport provider in the world to accept payment from contactless bank cards. The Underground first started accepting contactless debit and credit cards in September 2014. This was followed by the adoption of Apple Pay in 2015 and Android Pay in 2016, allowing payment using a contactless-enabled phone or smartwatch. Over 500 million journeys have taken place using contactless, and TfL has become one of Europe's largest contactless merchants, with around 1 in 10 contactless transactions in the UK taking place across the TfL network. This technology, developed in-house by TfL, has been licensed to other major cities like New York City and Boston. A concessionary fare scheme is operated by London Councils for residents who are disabled or meet certain age criteria. Residents born before 1951 were eligible after their 60th birthday, whereas those born in 1955 will need to wait until they are 66. Called a "Freedom Pass", it allows free travel on TfL-operated routes at all times and is valid on some National Rail services within London at weekends and after 09:30 from Monday to Friday. Since 2010, the Freedom Pass has included an embedded holder's photograph; it lasts five years between renewals. In addition to automatic and staffed faregates at stations, the Underground also operates on a proof-of-payment system. The system is patrolled by both uniformed and plain-clothes fare inspectors with hand-held Oyster-card readers. Passengers travelling without a valid ticket must pay a penalty fare of £80 (£40 if paid within 21 days) and can be prosecuted for fare evasion under the Regulation of Railways Act 1889 and Transport for London Byelaws. The tube closes overnight during the week, but since 2016 the Central, Jubilee, Northern, Piccadilly, and Victoria lines, as well as a short section of the London Overground, have operated all night on Friday and Saturday nights. The first trains run from about 05:00 and the last trains run until just after 01:00, with later starting times on Sunday mornings. 
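The fare-capping and incomplete-journey rules described above can be sketched in a few lines of code. This is only an illustrative model under assumed values: the per-journey fare, the cap and the maximum fare below are placeholders rather than TfL's actual tariff, and the function name is invented for this example.

# Illustrative sketch of Oyster-style daily capping and incomplete journeys.
# All monetary values are placeholders, not TfL's actual fares.

SINGLE_FARE = 2.40    # assumed pay-as-you-go fare for a complete journey
DAILY_CAP = 7.00      # assumed daily cap (price of a Day Travelcard)
MAXIMUM_FARE = 8.60   # assumed charge for an 'incomplete' journey

def day_total(journeys):
    """journeys: list of (touched_in, touched_out) pairs for one day."""
    total = 0.0
    for touched_in, touched_out in journeys:
        if touched_in and touched_out:
            # Complete journeys accumulate, but never beyond the daily cap.
            total = min(total + SINGLE_FARE, DAILY_CAP)
        else:
            # A journey without both a touch-in and a touch-out is 'incomplete'
            # and is charged the maximum fare, assumed here to sit outside the cap.
            total += MAXIMUM_FARE
    return round(total, 2)

print(day_total([(True, True)] * 4))                    # 7.0  -> capped
print(day_total([(True, True)] * 4 + [(True, False)]))  # 15.6 -> cap plus maximum fare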
The nightly closures are used for maintenance, but some lines stay open on New Year's Eve and run for longer hours during major public events such as the 2012 London Olympics. Some lines are occasionally closed for scheduled engineering work at weekends. The Underground runs a limited service on Christmas Eve, with some lines closing early, and does not operate on Christmas Day. Since 2010 a dispute between London Underground and trade unions over holiday pay has resulted in a limited service on Boxing Day.

On 19 August 2016, London Underground launched a 24-hour service on the Victoria and Central lines, with plans in place to extend this to the Piccadilly, Northern and Jubilee lines, running from Friday morning right through until Sunday evening. The Night Tube was originally scheduled to start on 12 September 2015, following completion of upgrades, but in August 2015 it was announced that the start date had been pushed back because of ongoing talks about contract terms between trade unions and London Underground. On 23 May 2016 it was announced that the night service would launch on 19 August 2016 for the Central and Victoria lines. The Jubilee, Piccadilly and Victoria lines, and the Central line between White City and Leytonstone, operate at 10-minute intervals. The Central line operates at 20-minute intervals between Leytonstone and Hainault, between Leytonstone and Loughton, and between White City and Ealing Broadway. The Northern line operates at roughly 8-minute intervals between Morden and Camden Town via Charing Cross, and at 15-minute intervals between Camden Town and Edgware and between Camden Town and High Barnet.

Accessibility for people with limited mobility was not considered when most of the system was built, and before 1993 fire regulations prohibited wheelchairs on the Underground. The stations on the Jubilee Line Extension, opened in 1999, were the first on the system designed with accessibility in mind, but retrofitting accessibility features to the older stations is a major investment that is planned to take over twenty years. A 2010 London Assembly report concluded that over 10% of people in London had reduced mobility and that, with an ageing population, the numbers would increase in the future. The standard issue tube map indicates stations that are step-free from street to platform. There can also be a sizeable step from platform to train and a gap between the train and curved platforms, and these distances are marked on the map. Access from platform to train at some stations can be assisted using a boarding ramp operated by staff, and a section has been raised on some platforms to reduce the step. There are 79 stations with step-free access from platform to train, and there are plans to provide step-free access at another 19 stations by 2024. By 2016 a third of stations had platform humps that reduce the step from platform to train. New trains, such as those being introduced on the sub-surface network, have access and room for wheelchairs, improved audio and visual information systems and accessible door controls.

During peak hours, stations can get so crowded that they need to be closed. Passengers may not be able to board the first train that arrives, and the majority of passengers do not find a seat, with some trains carrying more than four passengers per square metre. 
When asked, passengers report overcrowding as the aspect of the network with which they are least satisfied, and overcrowding has been linked to poor productivity and potentially poor heart health. Capacity increases have been overtaken by increased demand, and peak overcrowding has increased by 16 percent since 2004/05.

Compared with 2003/04, the reliability of the network had improved by 2010/11, with lost customer hours reduced from 54 million to 40 million. Passengers are entitled to a refund if their journey is delayed by 15 minutes or more due to circumstances within the control of TfL; in 2010, 330,000 of a potential 11 million Tube passengers claimed compensation for delays. Mobile phone apps and services have been developed to help passengers claim their refunds more efficiently.

London Underground is authorised to operate trains by the Office of Rail Regulation. There had been 310 days since the last major incident, in which a passenger died after falling on the track, and there have been nine consecutive years in which no employee fatalities have occurred. A special staff training facility was opened at West Ashfield tube station in TfL's Ashfield House, West Kensington, in 2010 at a cost of £800,000; Mayor of London Boris Johnson later decided it should be demolished along with the Earls Court Exhibition Centre as part of Europe's biggest regeneration scheme. In November 2011 it was reported that 80 people had died by suicide in the previous year on the London Underground, up from 46 in 2000. Most platforms at deep tube stations have pits, often referred to as 'suicide pits', beneath the track. These were constructed in 1926 to aid drainage of water from the platforms, but they also halve the likelihood of a fatality when a passenger falls or jumps in front of a train.

The Tube Challenge is the competition for the fastest time to travel to all London Underground stations, tracked by Guinness World Records since 1960. The goal is to visit all the stations on the system, but not necessarily using all the lines; participants may connect between stations on foot, or by using other forms of public transport. As of 2019, the record for fastest completion was held by Andi James (Finland) and Steve Wilson (UK), who completed the challenge in 15 hours, 45 minutes and 38 seconds on 21 May 2015.

Early maps of the Metropolitan and District railways were city maps with the lines superimposed, and the District published a pocket map in 1897. A Central London Railway route diagram appeared on a 1904 postcard and a 1905 poster, with similar maps appearing in District Railway cars in 1908. In the same year, following a marketing agreement between the operators, a joint central-area map that included all the lines was published. A new map was published in 1921 without any background details, but the central area was squashed, requiring smaller letters and arrows. Harry Beck had the idea of expanding this central area, distorting the geography and simplifying the map so that the railways appeared as straight lines with equally spaced stations. He presented his original draft in 1931, and after initial rejection it was first printed in 1933. Today's tube map is an evolution of that original design, and its ideas are used by many metro systems around the world. 
The current standard tube map shows the Docklands Light Railway, London Overground, Emirates Air Line, London Tramlink and the London Underground; a more detailed map covering a larger area, published by National Rail and Transport for London, includes suburban railway services. The tube map came second in a BBC and London Transport Museum poll asking for a favourite UK design icon of the 20th century, and the Underground's 150th anniversary was celebrated by a Google Doodle on the search engine.

While the first use of a roundel in a London transport context was the trademark of the London General Omnibus Company registered in 1905, it was first used on the Underground in 1908, when the UERL placed a solid red circle behind station nameboards on platforms to highlight the name. The word "UNDERGROUND" was placed in a roundel instead of a station name on posters in 1912 by Charles Sharland and Alfred France, as well as on undated and possibly earlier posters from the same period. Frank Pick, impressed by the Paris Metro but finding the solid red disc cumbersome, took a version from a 1915 Sharland poster in which the disc had become a ring, gave it to Edward Johnston to develop, and registered the symbol as a trademark in 1917. The roundel was first printed on a map cover using the Johnston typeface in June 1919, and printed in colour the following October. After the UERL was absorbed into the London Passenger Transport Board in 1933, it used forms of the roundel for buses, trams and coaches, as well as the Underground. The words "London Transport" were added inside the ring, above and below the bar. The Carr-Edwards report, published in 1938 as possibly the first attempt at a graphics standards manual, introduced stricter guidelines. Between 1948 and 1957 the word "Underground" in the bar was replaced by "London Transport". Forms of the roundel, with differing colours for the ring and bar, are used for other TfL services, such as London Buses, Tramlink, London Overground, London River Services and the Docklands Light Railway. Crossrail will also be identified with a roundel. The 100th anniversary of the roundel was celebrated in 2008 by TfL commissioning 100 artists to produce works that celebrate the design.

Seventy of the 270 London Underground stations use buildings that are on the Statutory List of Buildings of Special Architectural or Historic Interest, and five have entrances in listed buildings. The Metropolitan Railway's original seven stations were inspired by Italianate designs, with the platforms lit by daylight from above and by gas lights in large glass globes. Early District Railway stations were similar, and on both railways the further a station was from central London, the simpler its construction. The City & South London Railway opened with red-brick buildings, designed by Thomas Phillips Figgis, topped with a lead-covered dome that contained the lift mechanism and weather vane (still visible at many stations, e.g. Clapham Common). The Central London Railway appointed Harry Bell Measures as architect, who designed its pinkish-brown steel-framed buildings with larger entrances. In the first decade of the 20th century Leslie Green established a house style for the tube stations built by the UERL, which were clad in ox-blood faience blocks. Green pioneered the use of building design to guide passengers, with direction signs on tiled walls and each station given a unique identity through patterns on the platform walls. 
Many of these tile patterns survive, though a significant number are now replicas. Harry W. Ford was responsible for the design of at least 17 UERL and District Railway stations, including Barons Court and Embankment, and claimed to have first thought of enlarging the U and D in the UNDERGROUND wordmark. The Met's architect Charles Walter Clark had used a neo-classical design for rebuilding Baker Street and Paddington Praed Street stations before World War I and, although the fashion had changed, continued with Farringdon in 1923. The buildings had metal lettering attached to pale walls. Clark would later design "Chiltern Court", the large, luxurious block of apartments at Baker Street that opened in 1929. In the 1920s and 1930s, Charles Holden designed a series of modernist and art-deco stations, some of which he described as his 'brick boxes with concrete lids'. Holden's design for the Underground's headquarters building at 55 Broadway included avant-garde sculptures by Jacob Epstein, Eric Gill and Henry Moore. When the Central line was extended east, the stations were built to simplified Holden proto-Brutalist designs, and a cavernous concourse was built at Gants Hill in honour of early Moscow Metro stations. Few new stations were built in the 50 years after 1948, but Misha Black was appointed design consultant for the 1960s Victoria line, contributing to the line's uniform look, with each station having an individual tile motif. Notable stations from this period include Moor Park, the stations of the Piccadilly line extension to Heathrow, and Hillingdon. The stations of the 1990s extension of the Jubilee line were much larger than before and were designed in a high-tech style by architects such as Norman Foster and Michael Hopkins, making extensive use of exposed metal plating. West Ham station was built as a homage to the red-brick tube stations of the 1930s, using brick, concrete and glass.

Many platforms have unique interior designs to help passenger identification. The tiling at Baker Street incorporates repetitions of Sherlock Holmes's silhouette; at Tottenham Court Road semi-abstract mosaics by Eduardo Paolozzi feature musical instruments, tape machines and butterflies; and at Charing Cross, David Gentleman designed the mural depicting the construction of the Eleanor Cross. Robyn Denny designed the murals on the Northern line platforms at Embankment.

The first posters used various type fonts, as was contemporary practice, and station signs used sans serif block capitals. The Johnston typeface was developed in upper and lower case in 1916, and a complete set of blocks, marked Johnston Sans, was made by the printers the following year. A bold version of the capitals was developed by Johnston in 1929. The Met changed to a serif letterform for its signs in the 1920s, used on the stations rebuilt by Clark. Johnston was adopted system-wide after the formation of the LPTB in 1933, and the LT wordmark was applied to locomotives and carriages. Johnston was redesigned, becoming New Johnston, for photo-typesetting in the early 1980s, when Eiichi Kono designed a range that included Light, Medium and Bold, each with its italic version. The typesetters P22 developed today's electronic version, sometimes called TfL Johnston, in 1997. Early advertising posters used various letter fonts. Graphic posters first appeared in the 1890s, and it became possible to print colour images economically in the early 20th century. 
The Central London Railway used colour illustrations in its 1905 poster, and from 1908 the Underground Group, under Pick's direction, used images of country scenes, shopping and major events on posters to encourage use of the tube. Pick found he was limited by the commercial artists the printers used, and so commissioned work from artists and designers such as Dora Batty, Edward McKnight Kauffer, the cartoonist George Morrow, Herry (Heather) Perry, Graham Sutherland, Charles Sharland and the sisters Anna and Doris Zinkeisen. According to Ruth Artmonsky, over 150 women artists were commissioned by Pick, and latterly by Christian Barman, to design posters for London Underground, London Transport and London County Council Tramways. The Johnston Sans letter font began appearing on posters from 1917. The Met, strongly independent, used images on timetables and on the cover of its "Metro-land" guide that promoted the country it served for the walker, visitor and, later, the house-hunter. By the time London Transport was formed in 1933 the UERL was considered a patron of the arts, and over 1,000 works were commissioned in the 1930s, such as the cartoon images of Charles Burton and Kauffer's later abstract cubist and surrealist images. Harold Hutchison became London Transport publicity officer in 1947, after World War II and nationalisation, and introduced the "pair poster", in which an image on one poster was paired with text on another. The number of commissions dropped, to eight a year in the 1950s and just four a year in the 1970s, with images from artists such as Harry Stevens and Tom Eckersley.

Art on the Underground was launched in 2000 to revive London Underground's role as a patron of the arts. Today, commissions range from the pocket tube map cover, to temporary art pieces, to large-scale permanent installations in stations. Major commissions by Art on the Underground in recent years have included "Labyrinth" by Turner Prize-winning artist Mark Wallinger, marking the 150th anniversary of the London Underground; "Diamonds and Circles", permanent works "in situ" by French artist Daniel Buren at Tottenham Court Road; and "Beauty < Immortality", a memorial to Frank Pick by Langlands & Bell at Piccadilly Circus. Similarly, Poems on the Underground has, since 1986, commissioned poems that are displayed in carriages.

The Underground (including several fictitious stations) has been featured in many films and television shows, including "Skyfall", "Die Another Day", "Sliding Doors", "An American Werewolf in London", "Creep", "Tube Tales", "Sherlock" and "Neverwhere". The London Underground Film Office received over 200 requests to film in 2000. The Underground has also featured in music, such as The Jam's "Down in the Tube Station at Midnight", and in literature, such as the graphic novel "V for Vendetta". Popular legends about the Underground being haunted persist to this day. In 2016, British composer Daniel Liam Glyn released his concept album "Changing Stations", based on the 11 main tube lines of the London Underground network. One video game has a single-player level named "Mind The Gap", in which the player and a team of SAS operatives attempt, between the dockyards and Westminster, to stop terrorists escaping on a hijacked London Underground train. The same game also features the multiplayer map "Underground", in which players fight in a fictitious Underground station. 
The London Underground map serves as a playing field for the conceptual game of Mornington Crescent (which is named after a station on the Northern line) and the board game "The London Game".

The London Underground is frequently studied by academics because it is one of the largest, oldest and most widely used public transport systems in the world, and the transportation and complex-network literatures therefore include extensive information about the Tube system. For London Underground passengers, research suggests that transfers are highly costly in terms of walk and wait times. Because these costs are unevenly distributed across stations and platforms, path-choice analyses may be helpful in guiding upgrades and the choice of new stations. Routes on the Underground can also be optimised using a global network optimisation approach, akin to routing algorithms for Internet applications. Analysis of the Underground as a network may also be helpful for setting safety priorities, since the stations targeted in the 2005 London bombings were amongst the most effective for disrupting the transportation system. Since its beginnings in the 19th century, the Tube network has evolved greatly.
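As a minimal illustration of the kind of routing analysis described above, the following sketch runs Dijkstra's algorithm over a toy network in which changing lines carries an extra time penalty, reflecting the finding that transfers are costly in walk and wait time. The station names, running times and the five-minute interchange penalty are invented for the example; they are not measured London Underground data.

import heapq

# Toy network of (station, line) nodes; edge weights are minutes.
# Station names, running times and the 5-minute interchange penalty are
# invented for illustration, not measured data.
EDGES = {
    ("A", "red"):  [(("B", "red"), 2)],
    ("B", "red"):  [(("A", "red"), 2), (("C", "red"), 3), (("B", "blue"), 5)],  # changing line costs 5 min
    ("B", "blue"): [(("B", "red"), 5), (("D", "blue"), 2)],
    ("C", "red"):  [(("B", "red"), 3)],
    ("D", "blue"): [(("B", "blue"), 2)],
}

def quickest_time(start, destination):
    """Dijkstra's algorithm over (station, line) nodes, so that interchange
    penalties are counted as part of the journey time."""
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        minutes, node = heapq.heappop(queue)
        if node[0] == destination:  # reached the destination station on any line
            return minutes
        if minutes > best.get(node, float("inf")):
            continue
        for neighbour, weight in EDGES.get(node, []):
            candidate = minutes + weight
            if candidate < best.get(neighbour, float("inf")):
                best[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return None

if __name__ == "__main__":
    print(quickest_time(("A", "red"), "D"))  # 2 + 5 + 2 = 9 minutes via the interchange at B

Modelling each node as a (station, line) pair is what lets the interchange penalty appear as an ordinary edge weight, so the same shortest-path machinery used to route Internet traffic can weigh a longer single-line journey against a quicker route involving a change.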
https://en.wikipedia.org/wiki?curid=17839
Large technical system A large technical system (LTS) is a system or network of enormous proportions or complexity. The study of LTSs is a subdiscipline of the history of science and technology. The book "Rescuing Prometheus" by Thomas P. Hughes documents the development of four such systems, including the Boston Central Artery/Tunnel project and the Internet.
https://en.wikipedia.org/wiki?curid=17841
Lund University Lund University is a university in Sweden and one of northern Europe's oldest universities. It is located in the city of Lund in the province of Scania, Sweden. The university arguably traces its roots back to 1425, when a Franciscan studium generale was founded in Lund next to Lund Cathedral. After Sweden won Scania from Denmark in the 1658 Treaty of Roskilde, the university was officially founded in 1666 on the site of the old studium generale next to Lund Cathedral. Lund University has eight faculties, with additional campuses in the cities of Malmö and Helsingborg, and around 40,000 students in 270 different programmes and 1,300 freestanding courses. The university has some 600 partner universities in nearly 70 countries, and it belongs to the League of European Research Universities as well as the global Universitas 21 network. Lund University is consistently ranked among the world's top 100 universities. Two major facilities for materials research are located at Lund University: MAX IV, a synchrotron radiation laboratory inaugurated in June 2016, and the European Spallation Source (ESS), a new European facility that will provide neutron beams up to 100 times brighter than existing facilities, due to open in 2023. The university is centred on the Lundagård park adjacent to Lund Cathedral, with various departments spread across different locations in town, but mostly concentrated in a belt stretching north from the park, connecting to the university hospital area and continuing out to the north-eastern periphery of the town, where one finds the large campus of the Faculty of Engineering.

The city of Lund has a long history as a centre of learning and was the ecclesiastical centre and seat of the archbishop of Denmark. A cathedral school (the "Katedralskolan") for the training of clergy was established in 1085 and is today Scandinavia's oldest school. The university traces its roots back to 1425, when a Franciscan "studium generale" (a medieval university) was founded in Lund next to Lund Cathedral (with baccalaureus degrees awarded from 1438), making it the oldest institution of higher education in Scandinavia, followed by the studia generalia in Uppsala in 1477 and Copenhagen in 1479. The studium generale did not survive the Lutheran Reformation of 1536, which is why the university is considered a separate institution from its founding in 1666. After the Treaty of Roskilde in 1658, the Scanian lands came under the possession of the Swedish Crown, which founded the university in 1666 on the site of the old studium generale next to Lund Cathedral, as a means of making Scania Swedish by educating teachers in Swedish and of culturally integrating the region with Sweden. The university was named "Academia Carolina" after Charles X Gustav of Sweden until the late 19th century, when Lund University became the common name. It was the fifth university under the Swedish king, after Uppsala University (1477), the University of Tartu (1632, now in Estonia), the Academy of Åbo (1640, now in Finland) and the University of Greifswald (founded 1456; Swedish 1648–1815, now in Germany). At its founding the university was granted four faculties: Law, Theology, Medicine and Philosophy. These remained its cornerstones, and the system was in effect for more than 200 years. Towards the end of the 17th century, the number of students hovered around 100. 
Some notable professors in the early days were Samuel Pufendorf, a jurist and historian, and Canutus Hahn and Kristian Papke in philosophy. The Scanian War in 1676 led to a shutdown of the university, which lasted until 1682. The university was re-opened largely thanks to regional patriots, but it was not to enjoy high status until well into the 19th century. Lecture rooms were few, and lectures were held in Lund Cathedral and its adjacent chapel. The professors were underpaid.

In 1716, Charles XII of Sweden entered Lund. He stayed in the city for two years, between his military expeditions, and his presence brought Lund and the university a temporary boost in attention. The most notable lecturer during this time was Andreas Rydelius. Peace was finally restored with the death of Charles XII in 1718, and during the first half of the 18th century the university was granted additional funds. The number of students was now around 500. Despite not being on a par with Uppsala University, it had built a solid reputation and managed to attract prominent professors.

Around 1760 the university's reputation dropped as the number of students fell below 200, most of whom hailed from the surrounding province. By 1780, however, its reputation was largely restored, and it continued to rise through the 1820s. This was largely owing to popular and well-educated lecturers, particularly in philology; the prominent professor Esaias Tegnér was an especially notable figure with widespread authority, and he, in turn, attracted others to Lund. One of these was the young theological student C. G. Brunius, who studied ancient languages under Tegnér and was later to become professor of Greek. In time he devoted himself to architecture, and he redesigned several of Lund's buildings, as well as churches in the province. In 1845 and 1862 Lund co-hosted Nordic student meetings together with the University of Copenhagen. A student called Elsa Collin was the first woman in the whole of Sweden to take part in a spex.

In the early 20th century, the university had a student population as small as one thousand, consisting largely of upper-class pupils training to become civil servants, lawyers and doctors. In the following decades it started to grow significantly, until it became one of the country's largest. In 1964 the social sciences were split from the Faculty of Humanities. Lund Institute of Technology was established in 1961 but was merged with Lund University eight years later. In recent years, Lund University has been very popular among applicants to Swedish higher education institutions, both nationally and internationally. For studies starting in autumn 2012, Lund received 11,160 foreign master's applications from 152 countries, roughly one third of all international applications to Swedish universities.

The first woman to study in Lund was Hildegard Björck (in the spring of 1880), who had previously studied in Uppsala and had there been the first Swedish woman ever to receive an academic degree. Her time in Lund was, however, very brief, and the medical student Hedda Andersson, who entered the university later in 1880 (two years before the next woman to do so), is usually mentioned as the first woman at Lund University. Hilma Borelius was the first woman to finish a doctorate in Lund, in 1910. The first woman to be appointed to a professor's chair was the historian Birgitta Odén (1965). In 1992 Boel Flodgren, Professor of Business Law, was appointed rector magnificus (or, strictly speaking, "rectrix magnifica") of Lund University. 
As such, she was the first woman to be the head of a European university.

The university's facilities are mainly located in the small city of Lund in Scania, about 15 km from central Malmö and 50 km from Copenhagen. The large student and staff population makes an impact on the city, effectively making it a university town. Over a hundred university buildings are scattered around town, most of them in an area covering more than 1 km², stretching towards the north-east from Lundagård park in the very centre of town. Buildings in and around Lundagård include the main building, Kungshuset, the Historical Museum and the Academic Society's headquarters. The main library building is located in a park 400 metres to the north, followed by the large hospital complex. Lund University has a satellite campus in nearby Malmö, Sweden's third largest city. The Faculty of Fine and Performing Arts' three academies (Malmö Art Academy, Malmö Academy of Music and Malmö Theatre Academy) are all located in Malmö. The city is also the location of Skåne University Hospital, where Lund University performs a considerable amount of research and medical training. Campus Helsingborg is, as the name suggests, located in the city of Helsingborg, almost 50 km from Lund. Opened in 2000, it consists of a building in the city centre, right next to the central train station and the harbour. Nearly 3,000 students are based on the campus. The Department of Service Management and the Department of Communication and Media are among those located at the campus in Helsingborg. Teaching and training at the School of Aviation (LUSA) take place at an airfield next to the town of Ljungbyhed, about 40 km from Lund.

Lund University Library was established in 1668, at the same time as the university, and is one of Sweden's oldest and largest libraries. Since 1698 it has received legal deposit copies of everything printed in the country. Today six Swedish libraries receive legal deposit copies, but only Lund and the Royal Library in Stockholm are required to keep everything for posterity. Swedish imprints make up half of the collections, which amount to 170,000 linear metres of shelving (2006). The library handles 620,000 loans per year, has a staff of 200 full-time equivalents, and its 33 branch libraries house 2,600 reading-room desks. The current main building at Helgonabacken opened in 1907; before that, the library was housed in "Liberiet", close to the city's cathedral. Liberiet was built as a library in the 15th century but now serves as a cafe.

Education and research in the health sciences at the university are operated in cooperation with Skåne University Hospital, located in both Lund and Malmö. Medical education takes place in the Biomedical Centre, next to the hospital in Lund. Nursing and occupational therapy are taught in the Health Sciences Centre nearby. The university also operates the Clinical Research Centre in Malmö, featuring many specialised laboratories. There are over 100 faculty members.

The University Board is the university's highest decision-making body. The Board comprises the Vice-Chancellor, representatives of the teaching staff and students, and representatives of the community and business sector. The chair of the board is Margot Wallström. Executive power lies with the Vice-Chancellor and the University Management Group, to which most other administrative bodies are subordinate. 
Lund University is divided into eight faculties and is also organised into more than 20 institutes and research centres. Approximately 42,000 students study within one of the 276 educational programmes, the 100 international master's programmes or the 2,200 independent courses. Around five hundred courses are, or can be, held in English for the benefit of international exchange students. There are several programmes allowing foreign students to study abroad at the university. Notable exchange visitors include United States Supreme Court Justice Ruth Bader Ginsburg, who spent time at Lund University in the 1960s conducting research. The university offers 9 of the 20 most sought-after programmes in Sweden. The master's programme in International Marketing is the most popular choice in the country, with almost a thousand applications yearly. Students are awarded ECTS credits for all completed courses. Grading scales vary by programme and even by course; "Pass/Fail" and "Pass with distinction/Pass/Fail" are most common, but ECTS grades are increasingly given as well. Engineering students generally receive grades as "5/4/3/Fail".

Lund University is well known as one of Scandinavia's largest research universities. It ranks among the top performers in the European Union in terms of papers accepted for publication in scientific journals. It is one of Sweden's top receivers of research grants, most of which come from government-funded bodies. The EU is the university's second largest external research funder, and Lund is the 23rd largest receiver of funding within the union's Seventh Framework Programme. The university is active in many internationally important research areas such as nanotechnology, climate change and stem cell biology. One of the most famous innovations based on research from Lund University is diagnostic ultrasound, which is today a routine method of examination in hospitals around the world. Other examples of pioneering innovations are the artificial kidney, which laid the foundations for the multinational company Gambro and makes life easier for dialysis patients worldwide, and Bluetooth technology, which enables wireless communication over short distances.

Lund University is among the most renowned institutions of higher learning in the Nordic countries and is commonly ranked within the top 100 in the world by the most influential ranking agencies. Lund was ranked 60th in the world in the 2014/15 QS World University Rankings. In 2015, it was ranked 90th in the world by the Times Higher Education World University Rankings, and in the Academic Ranking of World Universities, compiled by Shanghai Jiao Tong University, it was in the 101–150 band. In the Leiden Ranking for 2019, Lund University was ranked 65th in the world. In 2014, Ecuador's National Secretariat of Higher Education, Science, Technology and Innovation (Senescyt) created a list of top universities around the world as a parameter for awarding scholarships, placing Lund University first in "Natural Sciences, Mathematics and Statistics" and in the top 3 in both "Information Technology and Communication" and "Engineering, Manufacturing and Construction". The 2013–2014 Times Higher Education World University Rankings placed Lund University 78th in the world in Engineering and Technology, 74th in Arts and Humanities, 77th in Clinical, Pre-clinical and Health, 67th in Life Sciences, and 57th in Physical Sciences. 
The 2015–2016 URAP (University Ranking by Academic Performance) placed Lund University 87th in the world, with an A++ rating. The 2015–16 Times Higher Education World University Rankings placed the university in 90th place worldwide.

Lund student life is based on three central structures: the student nations, the Academic Society (AF) and the student unions. Before 1 July 2010, students were required to enroll in a student union, a nation and AF in order to receive grades at the university, but this is no longer compulsory. Students may still enroll in these organizations if they wish.

The nations in Lund are a central part of the university's history, initially serving as residential colleges for students, organized by geographic origin. Östgöta Nation, the oldest nation, was established in 1668, two years after the university was founded. While the nations still offer limited housing, today they are best described as student societies. Students may now enroll in any nation, although the nations still preserve their geographic names. In most cases it does not matter which nation one enrolls in, but different nations offer different activities for interested students. Each nation has student housing, but the accommodation in no way meets demand, and places are usually allocated according to a queue system. Each nation has at least one pub evening per week, followed by a night club. The peak formal event of the year is each nation's annual student ball. The best known of the nation balls (as opposed to balls organized by student unions) is the one hosted by Göteborgs Nation, called the "Gustaf II Adolf Ball" (also known as the "GA-Ball"). Most nations also host at least one banquet per week, where a three-course dinner is served. Each nation also has different activities for students interested in sports, arts or partying. All activities within the nations are voluntary.

In 1830, Professor Carl Adolph Agardh formed "Akademiska Föreningen" (The Academic Society), commonly referred to as AF, with the goal of "developing and cultivating the academic life" by bringing students and faculty from all departments and student nations together in one organization. Prince Oscar, then Sweden's Chancellor of Education, donated 2,000 kronor to help found the society. In 1848, construction began on "AF-borgen" (the AF Fortress), which is located opposite the Main Building in Lundagård. To this day, AF is the centre of student life in Lund, housing many theatre companies and a prize-winning student radio station (Radio AF), and organizing the enormous "Lundakarnevalen" (the Lund Carnival) every four years. "AF Bostäder", an independent foundation with close ties to Akademiska Föreningen, maintains over 5,700 student residences in Lund.

The student unions represent students on various decision-making boards within the university and advise students regarding their rights, housing and career options. There are nine student unions, one for each faculty and an additional union for doctoral students. Lund's Doctoral Student Union is further divided into councils, one for each faculty except for the faculties of engineering and fine and performing arts. The unions are incorporated into the Association of Lund University Student Unions (LUS), which has two full-time representatives who attend weekly meetings with the vice-chancellor and other university bodies. 
The student union association runs services such as a loan institute, a day-care centre and a website with housing information. It also publishes the monthly Lundagård magazine. LINC (Lund University Finance Society), established in 1991, is the primary society for students interested in finance at Lund University. In 2019, LINC had a total of 1,800 members. LINC is the leading finance society in Sweden and one of the most prominent organizations of its kind in northern Europe. The organization aims to provide members with the skills to pursue a successful career in finance. LINC has a strong heritage, noteworthy alumni at top financial institutions, and a successful track record of organizing finance-related events, including the Investment Banking Forum.

Alumni and faculty of Lund University are associated with, among other things, the creation of the first implantable pacemaker, the development of echocardiography, the spread of modern physiotherapy, the discovery of the role of dopamine as an independent neurotransmitter, the determination of the number of human chromosomes, the establishment of osseointegration, the development of Bluetooth technology, and the creation of the Rydberg formula. The following is a selected list of some notable people who have been affiliated with Lund University as students or academics.

Samuel Pufendorf (1632–1694) was a notable jurist and philosopher known for his natural law theories, which influenced Adam Smith as well as Thomas Jefferson. Olof von Dalin (1708–1763) was an influential Swedish writer and historian of the late Enlightenment era. Peter Wieselgren (1800–1877) was a Swedish priest, literary critic and prominent leader of the Swedish temperance movement. Knut Wicksell (1851–1926) was an influential economist, sometimes considered one of the founders of modern macroeconomics. Oscar Olsson (1877–1950) was an important developer of self-education in Sweden and is known as the father of study circles. Bertil Ohlin (1899–1979) received the Nobel Prize in economic sciences in 1977 for theories concerning international trade and capital, and was the leader of the Liberal People's Party (Folkpartiet) for 23 years. Gunnar Jarring (1907–2002) was Sweden's ambassador to the UN 1956–1958 and Sweden's ambassador in Washington, DC, 1958–1964. Britta Holmström (1911–1992) was the founder of Individuell Människohjälp (IM), a human rights organization with activities in 12 countries. Torsten Hägerstrand (1916–2004) was an internationally renowned geographer, considered the father of 'time geography' and recipient of the Prix International de Géographie Vautrin Lud in 1992. Judith Wallerstein (1921–2012) was a renowned psychologist and internationally recognized authority on the effects of marriage and divorce on children and their parents.

Carl Linnaeus (1707–1778) began his academic career in Lund by studying medicine and botany for a year before moving to Uppsala. He is known as the father of modern taxonomy, and is also considered one of the fathers of modern ecology. Pehr Henrik Ling (1776–1839) is considered the prime developer of natural gymnastics, the father of Swedish massage, and one of the most important contributors to the development and spread of modern physical therapy. Carl Adolph Agardh (1787–1859) made important contributions to the study of algae and played an important role as a politician in raising educational standards in Sweden. 
Elias Magnus Fries (1794–1878) was a notable botanist who played a prominent role in the creation of the modern taxonomy of mushrooms. Nils Alwall (1904–1986) was a pioneer in hemodialysis who constructed the first practical dialysis machine, commercialized by the Gambro company. Rune Elmqvist (1906–1996) was a physician and medical engineer who developed the first implantable pacemaker as well as the first inkjet ECG printer. Lars Leksell (1907–1986) was a notable neurosurgeon who was the father of radiosurgery and later the inventor of the Gamma Knife. Inge Edler (1911–2001) developed medical ultrasonography, commonly known as echocardiography, in 1953 together with Hellmuth Hertz, and was awarded the Lasker Clinical Medical Research Award in 1977. Sune Bergström (1916–2004) and Bengt Samuelsson (1934–) were awarded the Nobel Prize in Physiology or Medicine in 1982 for "discoveries concerning prostaglandins and related biologically active substances". Arvid Carlsson (1923–) was awarded the Nobel Prize in Physiology or Medicine in 2000 for "discoveries concerning signal transduction in the nervous system" and is noted for having discovered the role of dopamine as an independent neurotransmitter.

Per Georg Scheutz (1785–1873) was a Swedish lawyer, publicist and inventor who created the first working programmable difference engine with a printing unit. Martin Wiberg (1826–1905) was a prolific inventor who, among many other things, created the first difference engine the size of a sewing machine that could calculate and print logarithmic tables. Johannes Rydberg (1854–1919) was a renowned physicist famous for the Rydberg formula and the Rydberg constant. Carl Charlier (1862–1934) was an internationally acclaimed astronomer who made important contributions to astronomy as well as statistics and was awarded the James Craig Watson Medal in 1924 and the Bruce Medal in 1933. Manne Siegbahn (1886–1978), a student of Rydberg, was awarded the Nobel Prize in Physics in 1924 for his discoveries and research in the field of X-ray spectroscopy. Oskar Klein (1894–1977) was an internationally renowned theoretical physicist famous for the Kaluza-Klein theory, the Klein-Gordon equation and the Klein-Nishina formula. Pehr Edman (1916–1977) was a renowned biochemist who developed a method for sequencing proteins, known as the Edman degradation, and has been called the father of modern biochemistry. Hellmuth Hertz (1920–1990) developed echocardiography together with Inge Edler (see above) and was also the first to develop inkjet printing technology. Lars Hörmander (1931–2012) is sometimes considered the foremost contributor to the modern theory of linear partial differential equations and received the Fields Medal in 1962 for his early work on equations with constant coefficients. Karl Johan Åström (1934–) is a notable control theorist who in 1993 was awarded the IEEE Medal of Honor for "fundamental contributions to theory and applications of adaptive control technology". Sven Mattisson (1955–) is an electrical engineer who was one of the developers of Bluetooth technology.

Rutger Macklean (1742–1816) was a prominent captain, politician and landowner remembered for introducing agricultural reforms that led to more effective large-scale farming in Sweden. Ernst Wigforss (1881–1977) was Sweden's finance minister 1925–1926 and 1932–1949 and has been considered the 'foremost developer of Swedish social democracy'. 
Östen Undén (1886–1974) was an internationally recognized professor of law and Sweden's minister of foreign affairs 1924–1926 and 1945–1962. Tage Erlander (1901–1985) was Sweden's prime minister 1945–1969, possibly a record of uninterrupted tenure in parliamentary democracies, and led his party through eleven elections. Ruth Bader Ginsburg (1933–) is a justice of the Supreme Court of the United States, the second woman to hold that position. Ingvar Carlsson (1934–) served as Sweden's prime minister 1986–1991 and 1994–1996 and as Sweden's deputy prime minister 1982–1986. Rupiah Banda (1937–) was the president of Zambia 2008–2011 and its vice president 2006–2008. Leif Silbersky (1938–) is a notable lawyer and author famous for taking on high-profile cases in Sweden. Marianne Lundius (1949–) has been president of the Supreme Court of Sweden since 2010, the first woman in that position. Utoni Nujoma (1952–) was Namibia's minister of foreign affairs 2010–2012 and has been the country's minister of justice since 2012.

Thomas Thorild (1759–1808) was a notable Swedish writer, poet and philosopher who, among many things, was an early proponent of gender equality. Esaias Tegnér (1782–1846) was an influential writer, poet, bishop and professor of the Greek language, perhaps most famous for his work Frithiofs Saga. Viktor Rydberg (1828–1895) was a notable journalist, writer and researcher, most famous for his works Tomten and Singoalla and regarded as one of Sweden's most important authors of the 19th century. Frans G Bengtsson (1894–1954) was a Swedish writer and poet famous for his novel The Long Ships (Röde Orm), which has been translated into at least 23 languages. Fritiof Nilsson Piraten (1895–1972) was a Swedish lawyer and popular author, known for his works Bombi Bitt och Jag and Bock i Örtagård. Hjalmar Gullberg (1898–1961) was a notable writer and poet who was also the head of the Swedish Radio Theatre 1936–1950. Ivar Harrie (1899–1973) was one of the founders of the newspaper Expressen, as well as its editor-in-chief 1944–1960. Hans Alfredsson (1931–2017) was a Swedish comedian, author and actor, sometimes regarded as the foremost representative of the so-called Lundahumorn (the humour from Lund). Agnes von Rosen was a bullfighter and stunt performer who spent most of her later years in Mexico. Axwell (born Axel Christofer Hedfors, 1977–) is a world-renowned DJ, perhaps best known as a member of the trio Swedish House Mafia.

Hans Rausing (1926–2019) was the managing director of Tetra Pak 1954–1985 and the company's chairman 1985–1993, and has been ranked as the third richest man in Sweden. Pehr G. Gyllenhammar (1935–) is a businessman who was the CEO and chairman of Volvo 1971–1983 and 1983–1993 respectively, the chairman of Procordia 1990–1992, Aviva 1998–2005 and Investment AB Kinnevik 2004–2007, and is the current vice chairman of Rothschild Europe. Bertil Hult (1941–) founded EF Education from his dormitory in Lund and was the company's CEO until 2002 and chairman until 2008. Olof Stenhammar (1941–) is a Swedish financier and businessman who founded Optionsmäklarna (OM), which later changed its name to OMX and is today part of the NASDAQ OMX Group. Michael Treschow (1943–) is the current chairman of Unilever and was the CEO of Atlas Copco and Electrolux 1991–1998 and 1998–2002 respectively, as well as the chairman of Ericsson 2002–2011. 
Stefan Persson (1947–) was the CEO of H&M 1982–1997, has been the company's chairman since 1998, and has been ranked among the top ten richest men in the world. Dan Olofsson (1950–) is a Swedish entrepreneur and philanthropist who founded the company Sigma and the foundation Star for Life and is a large shareholder in the company ÅF. Anders Dahlvig (1957–) was the CEO and President of the IKEA Group between 1999 and 2009, during which time IKEA experienced average growth of 11 percent, and is the current chairman of the New Wave Group. Charlotta Falvin (1966–) is a Swedish businesswoman who is the chairman of the companies Teknopol, Barista, Multi-Q and Ideon AB and a former CEO of TAT and Decuma. Ann-Sofie Johansson is the Creative Advisor and former Head of Design for the fashion retailer H&M. Cristina Stenbeck (1977–) is a Swedish businesswoman who is the current chairman of Investment AB Kinnevik.

Lund University cooperates with universities on all continents, both in research and in student exchange. Apart from being a member of the prestigious LERU and Universitas 21 networks, the university participates in the European Erasmus and Nordplus programmes. It also coordinates several intercontinental projects, mostly through the Erasmus Mundus programme.
https://en.wikipedia.org/wiki?curid=17843
Lord Peter Wimsey Lord Peter Death Bredon Wimsey is the fictional protagonist in a series of detective novels and short stories by Dorothy L. Sayers (and their continuation by Jill Paton Walsh). A dilettante who solves mysteries for his own amusement, Wimsey is an archetype for the British gentleman detective. Lord Peter is often assisted by his valet and former batman, Mervyn Bunter; his good friend and later brother-in-law, police detective Charles Parker; and in a few books by Harriet Vane, who becomes his wife.

Born in 1890 and ageing in real time, Wimsey is described as being of average height, with straw-coloured hair, a beaked nose, and a vaguely foolish face. Reputedly his looks were patterned after those of academic and poet Roy Ridley, whom Sayers briefly met after witnessing him read his Newdigate Prize-winning poem "Oxford" at the Encaenia ceremony in July 1913. Wimsey also possessed considerable intelligence and athletic ability, evidenced by his playing cricket for Oxford University while earning a First. He created a spectacularly successful publicity campaign for Whifflet cigarettes while working for Pym's Publicity Ltd, and at age 40 was able to turn three cartwheels in the office corridor, stopping just short of the boss's open office door ("Murder Must Advertise"). Among Lord Peter's hobbies, in addition to criminology, is collecting incunabula, books from the earliest days of printing. He is an expert on matters of food (especially wine), male fashion, and classical music; he excels at the piano, his repertoire including Bach's works for keyboard instruments. One of Lord Peter's cars is a 12-cylinder ("double-six") 1927 Daimler four-seater, which (like all his cars) he calls "Mrs Merdle" after a character in Charles Dickens's "Little Dorrit" who "hated fuss".

Lord Peter Wimsey's ancestry begins with the 12th-century knight Gerald de Wimsey, who went with King Richard the Lionheart on the Third Crusade and took part in the Siege of Acre. This makes the Wimseys an unusually ancient family, since "Very few English noble families go that far in the first creation; rebellions and monarchic head choppings had seen to that", as reviewer Janet Hitchman noted in the introduction to "Striding Folly". The family coat of arms is blazoned as "Sable, 3 mice courant, argent; crest, a domestic cat couched as to spring, proper". The family motto, displayed under its coat of arms, is "As my Whimsy takes me."

Lord Peter was the second of the three children of Mortimer Wimsey, 15th Duke of Denver, and Honoria Lucasta Delagardie, who lives on throughout the novels as the Dowager Duchess of Denver. She is witty and intelligent, and strongly supports her younger son, whom she plainly prefers over her less intelligent, more conventional older son Gerald, the 16th Duke. Gerald's snobbish wife, Helen, detests Peter. Gerald's son and heir is the devil-may-care Viscount St George. Lady Mary, the younger sister of the Duke and Lord Peter, leans strongly to the political left and scandalises much of her family by marrying a policeman of working-class origins. Lord Peter Wimsey is called "Lord" as he is the son of a Duke; this is a courtesy title, so he is not a peer and has no right to sit in the House of Lords, nor does the title pass on to any offspring he may have. As a boy, the young Peter Wimsey was, to the great distress of his father, strongly attached to an old, smelly poacher living at the edge of the family estate. 
In his youth Lord Peter was influenced by his maternal uncle Paul Delagardie, who took it upon himself to instruct his nephew in the facts of life: how to conduct various love affairs and treat his lovers. Lord Peter was educated at Eton College and Balliol College, Oxford, graduating with a first-class degree in history. He was also an outstanding cricketer, whose performances were still well remembered decades later. Though he did not take up an academic career, he was left with an enduring and deep love for Oxford. To his uncle's disappointment, Peter fell deeply in love with a young woman named Barbara and became engaged to her.

When the First World War broke out, he hastened to join the British Army, releasing Barbara from their engagement in case he was killed or mutilated. She later married another, less principled officer. Wimsey served on the Western Front from 1914 to 1918, reaching the rank of Major in the Rifle Brigade. He was appointed an Intelligence Officer, and on one occasion infiltrated the staff room of a German officer; though it is not explicitly stated, that feat implies that Wimsey spoke fluent and unaccented German. As noted in "Have His Carcase", he communicated at that time with British Intelligence using the Playfair cipher and became proficient in its use. For reasons never clarified, after the end of his spy mission Wimsey moved back from Intelligence in the later part of the war and resumed the role of a regular line officer. He was a conscientious and effective commanding officer, popular with the men under his command, an affection still retained by Wimsey's former soldiers many years after the war, as is evident from a short passage in "Clouds of Witness" and an extensive reminiscence in "Gaudy Night". In particular, while in the army he met Sergeant Mervyn Bunter, who had previously been in service.

In 1918, Wimsey was wounded by artillery fire near Caudry in France. He suffered a breakdown due to shell shock (now called post-traumatic stress disorder, but then often thought, by those without first-hand experience of it, to be a species of malingering) and was eventually sent home. While sharing this experience, which the Dowager Duchess referred to as "a jam", Wimsey and Bunter arranged that if they both survived the war, Bunter would become Wimsey's valet. Throughout the books, Bunter takes care to address Wimsey as "My Lord". Nevertheless, he is a friend as well as a servant, and Wimsey again and again expresses amazement at Bunter's high efficiency and competence in virtually every sphere of life.

After the war Wimsey was ill for many months, recovering at the family's ancestral home in Duke's Denver, a fictional setting (as is the Dukedom of Denver) about 15 miles (24 km) beyond the real Denver in Norfolk, on the A10 near Downham Market. Wimsey was for a time unable to give servants any orders whatsoever, since his wartime experience made him associate the giving of an order with causing the death of the person to whom the order was given. Bunter arrived and, with the approval of the Dowager Duchess, took up his post as valet. He moved Wimsey to a London flat at 110A Piccadilly, W1, while Wimsey recovered. Even much later, however, Wimsey would have relapses, especially when his actions caused a murderer to be hanged. As noted in "Whose Body?", on such occasions Bunter would take care of Wimsey and tenderly put him to bed, and they would revert to being "Major Wimsey" and "Sergeant Bunter". 
Lord Peter begins his hobby of investigation by recovering "The Attenbury Emeralds" in 1921. At the beginning of "Whose Body?" there appears the unpleasant Inspector Sugg, who is extremely hostile to Wimsey and tries to exclude him from the investigation (reminiscent of the relations between Sherlock Holmes and Inspector Lestrade). However, Wimsey is able to bypass Sugg through his friendship with Scotland Yard detective Charles Parker, a sergeant in 1921. At the end of "Whose Body?", Wimsey generously allows Sugg to take completely undeserved credit for the solution; the grateful Sugg can no longer maintain his hostility to Wimsey. In later books, Sugg fades away and Wimsey's relations with the police become dominated by his amicable partnership with Parker, who eventually rises to the rank of Commander (and becomes Wimsey's brother-in-law). Bunter, a man of many talents himself, not least photography, often proves instrumental in Peter's investigations.

However, Wimsey is not entirely well. At the end of the investigation in "Whose Body?" (1923) Wimsey hallucinates that he is back in the trenches. He soon recovers his senses and goes on a long holiday. The next year, he travels (in "Clouds of Witness", 1926) to the fictional Riddlesdale in North Yorkshire to assist his older brother Gerald, who has been accused of murdering Captain Denis Cathcart, their sister's fiancé. As Gerald is the Duke of Denver, he is tried by the entire House of Lords, as required by the law at that time, to much scandal and the distress of his wife Helen. Their sister, Lady Mary, also falls under suspicion. Lord Peter clears the Duke and Lady Mary, to whom Parker is attracted.

As a result of the slaughter of men in the First World War, there was in the UK a considerable imbalance between the sexes. It is not exactly known when Wimsey recruited Miss Climpson to run an undercover employment agency for women, a means to garner information from the otherwise inaccessible world of spinsters and widows, but it is prior to "Unnatural Death" (1927), in which Miss Climpson assists Wimsey's investigation of the suspicious death of an elderly cancer patient. Wimsey's highly effective idea is that a male detective going around asking questions is likely to arouse suspicion, while a middle-aged woman doing the same would be dismissed as a gossip, and people would speak openly to her.

As recounted in the short story "The Adventurous Exploit of the Cave of Ali Baba", in December 1927 Wimsey fakes his own death, supposedly while hunting big game in Tanganyika, to penetrate and break up a particularly dangerous and well-organised criminal gang. Only Wimsey's mother and sister, the loyal Bunter and Inspector Parker know he is still alive. Emerging victorious after more than a year masquerading as "the disgruntled sacked servant Rogers", Wimsey remarks that "We shall have an awful time with the lawyers, proving that I am me." In fact, he returns smoothly to his old life, and the interlude is never referred to in later books.

During the 1920s, Wimsey has affairs with various women, which are the subject of much gossip in Britain and Europe. This part of his life remains hazy: it is hardly ever mentioned in the books set in the same period, and most of the scanty information on the subject is given in flashbacks from later times, after he meets Harriet Vane and relations with other women become a closed chapter. 
In "Busman's Honeymoon" Wimsey facetiously refers to a gentleman's duty "to remember whom he had taken to bed" so as not to embarrass his bedmate by calling her by the wrong name. There are several references to a relationship with a famous Viennese opera singer, and Bunter—who evidently was involved with this, as with other parts of his master's life—recalls Wimsey being very angry with a French mistress who mistreated her own servant. The only one of Wimsey's earlier women to appear in person is the artist Marjorie Phelps, who plays an important role in "The Unpleasantness at the Bellona Club". She has known Wimsey for years and is attracted to him, though it is not explicitly stated whether they were lovers. Wimsey likes her, respects her, and enjoys her company—but that is not enough. In "Strong Poison", she is the first person other than Wimsey himself to realise that he has fallen in love with Harriet. In "Strong Poison" Lord Peter encounters Harriet Vane, a cerebral, Oxford-educated mystery writer, while she is on trial for the murder of her former lover. He falls in love with her at first sight. Wimsey saves her from the gallows, but she believes that gratitude is not a good foundation for marriage, and politely but firmly declines his frequent proposals. Lord Peter encourages his friend and foil, Chief Inspector Charles Parker, to propose to his sister, Lady Mary Wimsey, despite the great difference in their rank and wealth. They marry and have a son, named Charles Peter ("Peterkin"), and a daughter, Mary Lucasta ("Polly"). While on a fishing holiday in Scotland, Wimsey takes part in the investigation of the murder of an artist, related in "Five Red Herrings". Despite the rejection of his marriage proposal, he continues to court Miss Vane. In "Have His Carcase", he finds Harriet is not in London, but learns from a reporter that she has discovered a corpse while on a walking holiday on England's south coast. Wimsey is at her hotel the next morning. He not only investigates the death and offers proposals of marriage, but also acts as Harriet's patron and protector from press and police. Despite a prickly relationship, they work together to identify the murderer. Back in London, Wimsey goes under cover as "Death Bredon" at an advertising firm, working as a copywriter ("Murder Must Advertise"). Bredon is framed for murder, leading Charles Parker to "arrest" Bredon for murder in front of numerous witnesses. To distinguish Death Bredon from Lord Peter Wimsey, Parker smuggles Wimsey out of the police station and urges him to get into the papers. Accordingly, Wimsey accompanies "a Royal personage" to a public event, leading the press to carry pictures of both "Bredon" and Wimsey. In 1934 (in "The Nine Tailors") Wimsey must unravel a 20-year-old case of missing jewels, an unknown corpse, a missing World War I soldier believed alive, a murderous escaped convict believed dead, and a mysterious code concerning church bells. By 1935 Lord Peter is in continental Europe, acting as an unofficial attaché to the British Foreign Office. Harriet Vane contacts him about a problem she has been asked to investigate in her college at Oxford ("Gaudy Night"). At the end of their investigation, Vane finally accepts Wimsey's proposal of marriage. The couple marry on 8 October 1935, at St Cross Church, Oxford, as depicted in the opening collection of letters and diary entries in "Busman's Honeymoon". 
The Wimseys honeymoon at Talboys, a house in east Hertfordshire near Harriet's childhood home, which Peter has bought for her as a wedding present. There they find the body of the previous owner, and spend their honeymoon solving the case, thus having the proverbial "busman's honeymoon". Over the next five years, according to Sayers' short stories, the Wimseys have three sons: Bredon Delagardie Peter Wimsey (born in October 1936 in the story "The Haunted Policeman"), Roger Wimsey (born 1938), and Paul Wimsey (born 1940). However, according to the wartime publication of "The Wimsey Papers" in "The Spectator", the second son was called Paul. In "The Attenbury Emeralds", Paul is again the second son and Roger is the third son. In the subsequent "The Late Scholar", Roger is not mentioned at all. It may be presumed that Paul is named after Lord Peter's maternal uncle Paul Delagardie. "Roger" is an ancestral Wimsey name. In Sayers's final Wimsey story, the 1942 short story "Talboys", Peter and Harriet are enjoying rural domestic bliss with their three sons when Bredon, their first-born, is accused of the theft of prize peaches from the neighbour's tree. Peter and the accused set off to investigate and, of course, prove Bredon's innocence. Wimsey is described as having authored a number of fictitious works.
Dorothy Sayers wrote 11 Wimsey novels and a number of short stories featuring Wimsey and his family. Other recurring characters include Inspector Charles Parker, the family solicitor Mr Murbles, barrister Sir Impey Biggs, journalist Salcombe Hardy, and family friend and financial whiz the Honourable Freddy Arbuthnot, who finds himself entangled in the case in the first of the Wimsey books, "Whose Body?" (1923).
Sayers wrote no more Wimsey murder mysteries, and only one story involving him, after the outbreak of World War II. In "The Wimsey Papers", a series of fictionalised commentaries in the form of mock letters between members of the Wimsey family published in "The Spectator", there is a reference to Harriet's difficulty in continuing to write murder mysteries at a time when European dictators were openly committing mass murders with impunity; this seems to have reflected Sayers' own wartime feelings. "The Wimsey Papers" included a reference to Wimsey and Bunter setting out during the war on a secret mission of espionage in Europe, and provided the ironic epitaph Wimsey writes for himself: "Here lies an anachronism in the vague expectation of eternity". The papers also incidentally show that in addition to his thorough knowledge of the classics of English literature, Wimsey is familiar—though in fundamental disagreement—with the works of Karl Marx, and well able to debate with Marxists on their home ground.
The only occasion when Sayers returned to Wimsey was the 1942 short story "Talboys". The war then devastating Europe received only a single oblique reference. Though Sayers lived until 1957, she never again took up the Wimsey books after this final effort. In effect, rather than killing off her detective, as Conan Doyle unsuccessfully tried with his, Sayers pensioned Wimsey off to a happy, satisfying old age. Thus, Peter Wimsey remained forever fixed against the background of inter-war England, and the books are nowadays often read for their evocation of that period as much as for the intrinsic detective mysteries. It was left to Jill Paton Walsh to extend Wimsey's career through and beyond the Second World War. 
In the continuations "Thrones, Dominations", "A Presumption of Death", "The Attenbury Emeralds", and "The Late Scholar", Harriet lives with the children at Talboys, Wimsey and Bunter have returned successfully from their secret mission in 1940, and his nephew Lord St. George is killed while serving as an RAF pilot in the Battle of Britain. Consequently, when Wimsey's brother dies of a heart attack in 1951 during a fire in Bredon Hall at Duke's Denver, Wimsey becomes the Duke of Denver. His Grace is then drawn into a mystery at an Oxford college.
Sayers gave her own account of the character's genesis in the essay "How I Came to Invent the Character of Lord Peter Wimsey". Janet Hitchman, in the preface to "Striding Folly", remarks that "Wimsey may have been the sad ghost of a wartime lover (...). Oxford, as everywhere in the country, was filled with bereaved women, but it may have been more noticeable in university towns where a whole year's intake could be wiped out in France in less than an hour." There is, however, no verifiable evidence of any such World War I lover of Sayers on whom the character of Wimsey might be based. Another theory is that Wimsey was based, at least in part, on Eric Whelpton, who was a close friend of Sayers at Oxford. Ian Carmichael, who played the part of Wimsey in the first BBC television adaptation and studied the character and the books thoroughly, said that the character was Sayers' conception of the 'ideal man', based in part on her earlier romantic misfortunes.
Many episodes in the Wimsey books express a mild satire on the British class system, in particular in depicting the relationship between Wimsey and Bunter. The two of them are clearly the best and closest of friends, yet Bunter is invariably punctilious in using "my lord" even when they are alone, and "his lordship" in company. In a brief passage written from Bunter's point of view in "Busman's Honeymoon", Bunter is seen, even in the privacy of his own mind, to be thinking of his employer as "His Lordship". Wimsey and Bunter even mock the Jeeves and Wooster relationship. In "Whose Body?", when Wimsey is caught by a severe recurrence of his First World War shell-shock and nightmares and is taken care of by Bunter, the two of them revert to being "Major Wimsey" and "Sergeant Bunter". In that role, Bunter, sitting at the bedside of the sleeping Wimsey, is seen to mutter affectionately, "Bloody little fool!" In "The Vindictive Story of the Footsteps That Ran", the staunchly democratic Dr Hartman invites Bunter to sit down to eat together with himself and Wimsey at the doctor's modest apartment. Wimsey does not object, but Bunter strongly does: "If I may state my own preference, sir, it would be to wait upon you and his lordship in the usual manner". Whereupon Wimsey remarks: "Bunter likes me to know my place". At the conclusion of "Strong Poison", Inspector Parker asks "What would one naturally do if one found one's water-bottle empty?" (a point of crucial importance in solving the book's mystery). Wimsey promptly answers, "Ring the bell." Whereupon Miss Murchison, the indefatigable investigator employed by Wimsey for much of this book, comments, "Or, if one wasn't accustomed to be waited on, one might use the water from the bedroom jug."
George Orwell was highly critical of this aspect of the Wimsey books: "... Even she [Sayers] is not so far removed from "Peg's Paper" as might appear at a casual glance. It is, after all, a very ancient trick to write novels with a lord for a hero. 
Where Miss Sayers has shown more astuteness than most is in perceiving that you can carry that kind of thing off a great deal better if you pretend to treat it as a joke. By being, on the surface, a little ironical about Lord Peter Wimsey and his noble ancestors, she is enabled to lay on the snobbishness ('his lordship' etc.) much thicker than any overt snob would dare to do". In fact, Sayers took the trouble to make the character halfway plausible by having his manner result from the stress of fighting in the Great War (which included an episode of being buried alive). Wimsey was not like that before the War, but afterward attempted to cope with his haunting memories by adopting "a mask of impenetrable frivolity". Thus, it is Wimsey himself who is laying it on thick, since the character requires that type of mockery, either of himself or of public perceptions of his class.
In 1935, the British film "The Silent Passenger" was released, in which Lord Peter, played by the well-known comic actor Peter Haddon, solved a mystery on the boat train crossing the English Channel. Sayers disliked the film, and James Brabazon describes it as an "oddity, in which Dorothy's contribution was altered out of all recognition." The novel "Busman's Honeymoon" was originally a stage play by Sayers and her friend Muriel St. Clare Byrne. A 1940 film of "Busman's Honeymoon" (US: "The Haunted Honeymoon"), starring Robert Montgomery and Constance Cummings as Lord and Lady Peter (with Seymour Hicks as Bunter), was released, but the characters and events bore little resemblance to Sayers's writing, and she refused even to see the film. A more faithful BBC television version of the play, with Harold Warrender as Lord Peter, was transmitted live on 2 October 1947. A second live BBC version was broadcast on 3 October 1957, with Peter Gray as Wimsey.
Edward Petherbridge played Lord Peter for BBC Television in 1987, in a series in which three of the four major Wimsey/Vane novels ("Strong Poison", "Have His Carcase" and "Gaudy Night") were dramatised under the umbrella title "A Dorothy L. Sayers Mystery". Harriet Vane was played by Harriet Walter and Bunter was played by Richard Morant. The BBC was unable to secure the rights to turn "Busman's Honeymoon" into a proposed fourth and last part of the planned 13-episode series, so the series was produced as ten episodes. Both the 1970s productions and the 1987 series are now available on videotape and DVD. Edward Petherbridge also played Wimsey in the UK production of the "Busman's Honeymoon" play staged at the Lyric Hammersmith and on tour in 1988, with the role of Harriet being taken by his real-life spouse, Emily Richard. Both sets of adaptations were critically successful, with both Carmichael's and Petherbridge's performances being widely praised; however, the two portrayals are quite different from one another: Carmichael's Peter is eccentric, jolly and foppish, with occasional glimpses of the wistful, romantic soul within, whereas Petherbridge's portrayal is calmer and more solemn, with a stiff upper lip, subtly downplaying many of the character's eccentricities.
Ian Carmichael returned as Lord Peter in radio adaptations of the novels made by the BBC, all of which have been available on cassette and CD from the BBC Radio Collection. These co-starred Peter Jones as Bunter. 
In the original series, which ran on Radio 4 from 1973 to 1983, no adaptation was made of the seminal "Gaudy Night", perhaps because the leading character in this novel is Harriet and not Peter; this was corrected in 2005 when a version specially recorded for the BBC Radio Collection was released starring Carmichael and Joanna David. The CD also includes a panel discussion on the novel, the major participants in which are P. D. James and Jill Paton Walsh. "Gaudy Night" was released as an unabridged audio book read by Ian Carmichael in 1993. Gary Bond starred as Lord Peter Wimsey and John Cater as Bunter in two single-episode BBC Radio 4 adaptations: "The Nine Tailors" on 25 December 1986 and "Whose Body?" on 26 December 1987.
Lord Peter Wimsey has also been included by the science fiction writer Philip José Farmer as a member of the Wold Newton family, and Laurie R. King's detective character Mary Russell meets Lord Peter at a party in the novel "A Letter of Mary".
https://en.wikipedia.org/wiki?curid=17844
Letter (message) A letter is a written message conveyed from one person to another person through a medium. Letters can be formal or informal. Besides being a means of communication and a store of information, letter writing has played a role in the reproduction of writing as an art throughout history. Letters have been sent since antiquity and are mentioned in the "Iliad". The historians Herodotus and Thucydides mention and utilize letters in their writings. Historically, letters have existed from the time of ancient India, ancient Egypt and Sumer, through Rome, Greece and China, up to the present day. During the seventeenth and eighteenth centuries, letters were used for self-education. Letters were a way to practice critical reading, self-expressive writing and polemical writing, and to exchange ideas with like-minded others. For some people, letters were seen as a written performance. For others, letter writing was not only a performance but also a means of communication and a method of gaining feedback. Letters make up several of the books of the Bible. Archives of correspondence, whether for personal, diplomatic, or business reasons, serve as primary sources for historians. At certain times, the writing of letters was thought to be an art form and a genre of literature, for instance in Byzantine epistolography. In the ancient world letters were written on a variety of materials, including metal, lead, wax-coated wooden tablets, pottery fragments, animal skin, and papyrus. From Ovid, we learn that Acontius used an apple for his letter to Cydippe.
As communication technology has diversified, posted letters have become less important as a routine form of communication. For example, the development of the telegraph drastically shortened the time taken to send a communication, by sending it between distant points as an electrical signal. At the telegraph office closest to the destination, the signal was converted back into writing on paper and delivered to the recipient. The next step was the telex, which avoided the need for local delivery. Then followed the fax (facsimile) machine: a letter could be transferred from the sender to the receiver through the telephone network as an image. These technologies did not displace physical letters as the primary route for communication; today, however, the internet, by means of email, plays the main role in written communication. Such electronic communications are not generally referred to as letters but rather as e-mail (or email) messages, or simply emails, with the term "letter" generally reserved for communications on paper.
Due to the timelessness and universality of letter writing, there is a wealth of letters and instructional materials (for example, manuals, as in the medieval ars dictaminis) on letter writing throughout history. The study of letter writing usually involves both the study of rhetoric and grammar.
Despite email, letters are still popular, particularly in business and for official communications, although many "letters" are now sent in electronic form. Several arguments are frequently put forth for the advantages letters may have over email. The journey of a letter from sender to recipient can, depending on the distance between them, take anywhere from a day to three or four weeks; international mail is carried to other countries by train and airplane. 
However, in 2008, Janet Barrett from the UK received an RSVP to a party invitation addressed to 'Percy Bateman', from 'Buffy', originally posted on 29 November 1919. It had taken 89 years to be delivered by the Royal Mail. However, Royal Mail denied this, saying that it would be impossible for a letter to have remained in their system for so long, as checks are carried out regularly. Instead, the letter dated 1919 may have "been a collector's item which was being sent in another envelope and somehow came free of the outer packaging". There are a number of different types of letter.
https://en.wikipedia.org/wiki?curid=17845
Lesbian A lesbian is a homosexual woman. The word "lesbian" is also used for women in relation to their sexual identity or sexual behavior, regardless of sexual orientation, or as an adjective to characterize or associate nouns with female homosexuality or same-sex attraction. The concept of "lesbian" to differentiate women with a shared sexual orientation evolved in the 20th century. Throughout history, women have not had the same freedom or independence as men to pursue homosexual relationships, but neither have they met the same harsh punishment as homosexual men in some societies. Instead, lesbian relationships have often been regarded as harmless, unless a participant attempts to assert privileges traditionally enjoyed by men. As a result, little in history was documented to give an accurate description of how female homosexuality was expressed. When early sexologists in the late 19th century began to categorize and describe homosexual behavior, hampered by a lack of knowledge about homosexuality or women's sexuality, they distinguished lesbians as women who did not adhere to female gender roles and incorrectly designated them mentally ill—a designation which has been reversed in the global scientific community. Women in homosexual relationships responded to this designation either by hiding their personal lives or accepting the label of outcast and creating a subculture and identity that developed in Europe and the United States. Following World War II, during a period of social repression when governments actively persecuted homosexuals, women developed networks to socialize with and educate each other. Greater economic and social freedom allowed them gradually to be able to determine how they could form relationships and families. With second wave feminism and the growth of scholarship in women's history and sexuality in the 20th century, the definition of "lesbian" broadened, sparking a debate about sexual desire as the major component to define what a lesbian is. Some women who engage in same-sex sexual activity may reject not only identifying as lesbians but as bisexual as well, while other women's self-identification as lesbian may not align with their sexual orientation or sexual behavior. Sexual identity is not necessarily the same as one's sexual orientation or sexual behavior, due to various reasons, such as the fear of identifying their sexual orientation in a homophobic setting. Portrayals of lesbians in the media suggest that society at large has been simultaneously intrigued and threatened by women who challenge feminine gender roles, as well as fascinated and appalled with women who are romantically involved with other women. Women who adopt a lesbian identity share experiences that form an outlook similar to an ethnic identity: as homosexuals, they are unified by the heterosexist discrimination and potential rejection they face from their families, friends, and others as a result of homophobia. As women, they face concerns separate from men. Lesbians may encounter distinct physical or mental health concerns arising from discrimination, prejudice, and minority stress. Political conditions and social attitudes also affect the formation of lesbian relationships and families in open. The word "lesbian" is derived from the name of the Greek island of Lesbos, home to the 6th-century BCE poet Sappho. From various ancient writings, historians gathered that a group of young women were left in Sappho's charge for their instruction or cultural edification. 
Little of Sappho's poetry survives, but her remaining poetry reflects the topics she wrote about: women's daily lives, their relationships, and rituals. She focused on the beauty of women and proclaimed her love for girls. Before the mid-19th century, the word "lesbian" referred to any derivative or aspect of Lesbos, including a type of wine. In Algernon Charles Swinburne's 1866 poem "Sapphics", the term "lesbian" appears twice but capitalized both times after twice mentioning the island of Lesbos, and so could be construed to mean 'from the island of Lesbos'. In 1875, George Saintsbury, in writing about Baudelaire's poetry, refers to his "Lesbian studies" in which he includes his poem about "the passion of Delphine" which is a poem simply about love between two women which does not mention the island of Lesbos, though the other poem alluded to, entitled "Lesbos", does. Use of the word "lesbianism" to describe erotic relationships between women had been documented in 1870. In 1890, the term "lesbian" was used in a medical dictionary as an adjective to describe tribadism (as "lesbian love"). The terms "lesbian", "invert" and "homosexual" were interchangeable with "sapphist" and "sapphism" around the turn of the 20th century. The use of "lesbian" in medical literature became prominent; by 1925, the word was recorded as a noun to mean the female equivalent of a sodomite. The development of medical knowledge was a significant factor in further connotations of the term "lesbian." In the middle of the 19th century, medical writers attempted to establish ways to identify male homosexuality, which was considered a significant social problem in most Western societies. In categorizing behavior that indicated what was referred to as "inversion" by German sexologist Magnus Hirschfeld, researchers categorized what was normal sexual behavior for men and women, and therefore to what extent men and women varied from the "perfect male sexual type" and the "perfect female sexual type". Far less literature focused on female homosexual behavior than on male homosexuality, as medical professionals did not consider it a significant problem. In some cases, it was not acknowledged to exist. However, sexologists Richard von Krafft-Ebing from Germany, and Britain's Havelock Ellis wrote some of the earliest and more enduring categorizations of female same-sex attraction, approaching it as a form of insanity (Ellis' categorization of "lesbianism" as a medical problem is now discredited). Krafft-Ebing, who considered lesbianism (what he termed "Uranism") a neurological disease, and Ellis, who was influenced by Krafft-Ebing's writings, disagreed about whether sexual inversion was generally a lifelong condition. Ellis believed that many women who professed love for other women changed their feelings about such relationships after they had experienced marriage and a "practical life". However, Ellis conceded that there were "true inverts" who would spend their lives pursuing erotic relationships with women. These were members of the "third sex" who rejected the roles of women to be subservient, feminine, and domestic. "Invert" described the opposite gender roles, and also the related attraction to women instead of men; since women in the Victorian period were considered unable to initiate sexual encounters, women who did so with other women were thought of as possessing masculine sexual desires. The work of Krafft-Ebing and Ellis was widely read, and helped to create public consciousness of female homosexuality. 
The sexologists' claims that homosexuality was a congenital anomaly were generally well-accepted by homosexual men; it indicated that their behavior was not inspired by nor should be considered a criminal vice, as was widely acknowledged. In the absence of any other material to describe their emotions, homosexuals accepted the designation of different or perverted, and used their outlaw status to form social circles in Paris and Berlin. "Lesbian" began to describe elements of a subculture. Lesbians in Western cultures in particular often classify themselves as having an identity that defines their individual sexuality, as well as their membership to a group that shares common traits. Women in many cultures throughout history have had sexual relations with other women, but they rarely were designated as part of a group of people based on whom they had physical relations with. As women have generally been political minorities in Western cultures, the added medical designation of homosexuality has been cause for the development of a subcultural identity. The notion that sexual activity between women is necessary to define a lesbian or lesbian relationship continues to be debated. According to feminist writer Naomi McCormick, women's sexuality is constructed by men, whose primary indicator of lesbian sexual orientation is sexual experience with other women. The same indicator is not necessary to identify a woman as heterosexual, however. McCormick states that emotional, mental, and ideological connections between women are as important or more so than the genital. Nonetheless, in the 1980s, a significant movement rejected the desexualization of lesbianism by cultural feminists, causing a heated controversy called the feminist sex wars. Butch and femme roles returned, although not as strictly followed as they were in the 1950s. They became a mode of chosen sexual self-expression for some women in the 1990s. Once again, women felt safer claiming to be more sexually adventurous, and sexual flexibility became more accepted. The focus of this debate often centers on a phenomenon named by sexologist Pepper Schwartz in 1983. Schwartz found that long-term lesbian couples report having less sexual contact than heterosexual or homosexual male couples, calling this lesbian bed death. However, lesbians dispute the study's definition of sexual contact, and introduced other factors such as deeper connections existing between women that make frequent sexual relations redundant, greater sexual fluidity in women causing them to move from heterosexual to bisexual to lesbian numerous times through their lives—or reject the labels entirely. Further arguments attested that the study was flawed and misrepresented accurate sexual contact between women, or sexual contact between women has increased since 1983 as many lesbians find themselves freer to sexually express themselves. More discussion on gender and sexual orientation identity has affected how many women label or view themselves. Most people in western culture are taught that heterosexuality is an innate quality in all people. When a woman realizes her romantic and sexual attraction to another woman, it may cause an "existential crisis"; many who go through this adopt the identity of a lesbian, challenging what society has offered in stereotypes about homosexuals, to learn how to function within a homosexual subculture. 
Lesbians in western cultures generally share an identity that parallels those built on ethnicity; they have a shared history and subculture, and similar experiences with discrimination, which has caused many lesbians to reject heterosexual principles. This identity is distinct from that of gay men and heterosexual women, and often creates tension with bisexual women. One point of contention is lesbians who have had sex with men, while lesbians who have never had sex with men may be referred to as "gold star lesbians". Those who have had sex with men may face ridicule from other lesbians or identity challenges with regard to defining what it means to be a lesbian.
Researchers, including social scientists, state that often behavior and identity do not match: women may label themselves heterosexual but have sexual relations with women, self-identified lesbians may have sex with men, or women may find that what they considered an immutable sexual identity has changed over time. A 2001 article on differentiating lesbians for medical studies and health research suggested identifying lesbians using the three characteristics of identity only, sexual behavior only, or both combined. The article declined to include desire or attraction as it rarely has bearing on measurable health or psychosocial issues. Researchers state that there is no standard definition of "lesbian" because "[t]he term has been used to describe women who have sex with women, either exclusively or in addition to sex with men (i.e., "behavior"); women who self-identify as lesbian (i.e., "identity"); and women whose sexual preference is for women (i.e., "desire" or "attraction")" and that "[t]he lack of a standard definition of lesbian and of standard questions to assess who is lesbian has made it difficult to clearly define a population of lesbian women". How and where study samples were obtained can also affect the definition.
The varied meanings of "lesbian" since the early 20th century have prompted some historians to revisit historic relationships between women before the wide usage of the word was defined by erotic proclivities. Discussion from historians caused further questioning of what qualifies as a lesbian relationship. As lesbian-feminists asserted, a sexual component was unnecessary in declaring oneself a lesbian if the primary and closest relationships were with women. When considering past relationships within appropriate historic context, there were times when love and sex were separate and unrelated notions. In 1989, an academic cohort named the Lesbian History Group wrote: "Because of society's reluctance to admit that lesbians exist, a high degree of certainty is expected before historians or biographers are allowed to use the label. Evidence that would suffice in any other situation is inadequate here... A woman who never married, who lived with another woman, whose friends were mostly women, or who moved in known lesbian or mixed gay circles, may well have been a lesbian. ... But this sort of evidence is not 'proof'. What our critics want is incontrovertible evidence of sexual activity between women. This is almost impossible to find."
Female sexuality is often not adequately represented in texts and documents. Until very recently, much of what has been documented about women's sexuality has been written by men, in the context of male understanding, and relevant to women's associations to men—as their wives, daughters, or mothers, for example. 
Often artistic representations of female sexuality suggest trends or ideas on broad scales, giving historians clues as to how widespread or accepted erotic relationships between women were. History is often analyzed with contemporary ideologies; ancient Greece as a subject enjoyed popularity by the ruling class in Britain during the 19th century. Based on their social priorities, British scholars interpreted ancient Greece as a westernized, white, and masculine society, and essentially removed women from historical importance. Women in Greece were sequestered with each other, and men with men. In this homosocial environment, erotic and sexual relationships between males were common and recorded in literature, art, and philosophy. Hardly anything is recorded about homosexual activity between women. There is some speculation that similar relationships existed between women and girls. The poet Alcman used the term "aitis," as the feminine form of "aites"—which was the official term for the younger participant in a pederastic relationship. Aristophanes, in Plato's "Symposium", mentions women who love women, but uses the term "trepesthai" (to be focused on) instead of "eros", which was applied to other erotic relationships between men, and between men and women. Historian Nancy Rabinowitz argues that ancient Greek red vase images portraying women with their arms around another woman's waist, or leaning on a woman's shoulders can be construed as expressions of romantic desire. Much of the daily lives of women in ancient Greece is unknown, specifically their expressions of sexuality. Although men participated in pederastic relationships outside marriage, there is no clear evidence that women were allowed or encouraged to have same-sex relationships before or during marriage as long as their marital obligations were met. Women who appear on Greek pottery are depicted with affection, and in instances where women appear only with other women, their images are eroticized: bathing, touching one another, with dildos placed in and around such scenes, and sometimes with imagery also seen in depictions of heterosexual marriage or pederastic seduction. Whether this eroticism is for the viewer or an accurate representation of life is unknown. Women in ancient Rome were similarly subject to men's definitions of sexuality. Modern scholarship indicates that men viewed female homosexuality with hostility. They considered women who engaged in sexual relations with other women to be biological oddities that would attempt to penetrate women—and sometimes men—with "monstrously enlarged" clitorises. According to scholar James Butrica, lesbianism "challenged not only the Roman male's view of himself as the exclusive giver of sexual pleasure but also the most basic foundations of Rome's male-dominated culture". No historical documentation exists of women who had other women as sex partners. Female homosexuality has not received the same negative response from religious or criminal authorities as male homosexuality or adultery has throughout history. Whereas sodomy between men, men and women, and men and animals was punishable by death in Britain, acknowledgment of sexual contact between women was nonexistent in medical and legal texts. The earliest law against female homosexuality appeared in France in 1270. In Spain, Italy, and the Holy Roman Empire, sodomy between women was included in acts considered unnatural and punishable by burning to death, although few instances are recorded of this taking place. 
The earliest such execution occurred in Speier, Germany, in 1477. Forty days' penance was demanded of nuns who "rode" each other or were discovered to have touched each other's breasts. An Italian nun named Sister Benedetta Carlini was documented to have seduced many of her sisters when possessed by a Divine spirit named "Splenditello"; to end her relationships with other women, she was placed in solitary confinement for the last 40 years of her life. Female homoeroticism, however, was so common in English literature and theater that historians suggest it was fashionable for a period during the Renaissance. Ideas about women's sexuality were linked to contemporary understanding of female physiology. The vagina was considered an inward version of the penis; where nature's perfection created a man, often nature was thought to be trying to right itself by prolapsing the vagina to form a penis in some women. These sex changes were later thought to be cases of hermaphrodites, and hermaphroditism became synonymous with female same-sex desire. Medical consideration of hermaphroditism depended upon measurements of the clitoris; a longer, engorged clitoris was thought to be used by women to penetrate other women. Penetration was the focus of concern in all sexual acts, and a woman who was thought to have uncontrollable desires because of her engorged clitoris was called a "tribade" (literally, one who rubs). Not only was an abnormally engorged clitoris thought to create lusts in some women that led them to masturbate, but pamphlets warning women about masturbation leading to such oversized organs were written as cautionary tales. For a while, masturbation and lesbian sex carried the same meaning. Class distinction, however, became linked as the fashion of female homoeroticism passed. Tribades were simultaneously considered members of the lower class trying to ruin virtuous women, and representatives of an aristocracy corrupt with debauchery. Satirical writers began to suggest that political rivals (or more often, their wives) engaged in tribadism in order to harm their reputations. Queen Anne was rumored to have a passionate relationship with Sarah Churchill, Duchess of Marlborough, her closest adviser and confidante. When Churchill was ousted as the queen's favorite, she purportedly spread allegations of the queen having affairs with her bedchamberwomen. Marie Antoinette was also the subject of such speculation for some months between 1795 and 1796. Hermaphroditism appeared in medical literature enough to be considered common knowledge, although cases were rare. Homoerotic elements in literature were pervasive, specifically the masquerade of one gender for another to fool an unsuspecting woman into being seduced. Such plot devices were used in Shakespeare's "Twelfth Night" (1601), "The Faerie Queene" by Edmund Spenser in 1590, and James Shirley's "The Bird in a Cage" (1633). Cases during the Renaissance of women taking on male personae and going undetected for years or decades have been recorded, though whether these cases would be described as transvestism by homosexual women, or in contemporary sociology characterised as transgender, is debated and depends on the individual details of each case. If discovered, punishments ranged from death, to time in the pillory, to being ordered never to dress as a man again. 
Henry Fielding wrote a pamphlet titled "The Female Husband" in 1746, based on the life of Mary Hamilton, who was arrested after marrying a woman while masquerading as a man, and was sentenced to public whipping and six months in jail. Similar examples were procured of Catharine Linck in Prussia in 1717, executed in 1721; Swiss Anne Grandjean married and relocated with her wife to Lyons, but was exposed by a woman with whom she had had a previous affair and sentenced to time in the stocks and prison. Queen Christina of Sweden's tendency to dress as a man was well known during her time, and excused because of her noble birth. She was brought up as a male and there was speculation at the time that she was a hermaphrodite. Even after Christina abdicated the throne in 1654 to avoid marriage, she was known to pursue romantic relationships with women. Some historians view cases of cross-dressing women to be manifestations of women seizing power they would naturally be unable to enjoy in feminine attire, or their way of making sense out of their desire for women. Lillian Faderman argues that Western society was threatened by women who rejected their feminine roles. Catharine Linck and other women who were accused of using dildos, such as two nuns in 16th century Spain executed for using "material instruments", were punished more severely than those who did not. Two marriages between women were recorded in Cheshire, England, in 1707 (between Hannah Wright and Anne Gaskill) and 1708 (between Ane Norton and Alice Pickford) with no comment about both parties being female. Reports of clergymen with lax standards who performed weddings—and wrote their suspicions about one member of the wedding party—continued to appear for the next century. Outside Europe, women were able to dress as men and go undetected. Deborah Sampson fought in the American Revolution under the name Robert Shurtlieff, and pursued relationships with women. Edward De Lacy Evans was born female in Ireland, but took a male name during the voyage to Australia and lived as a man for 23 years in Victoria, marrying three times. Percy Redwood created a scandal in New Zealand in 1909 when she was found to be Amy Bock, who had married a woman from Port Molyneaux; newspapers argued whether it was a sign of insanity or an inherent character flaw. During the 17th through 19th centuries, a woman expressing passionate love for another woman was fashionable, accepted, and encouraged. These relationships were termed romantic friendships, Boston marriages, or "sentimental friends", and were common in the U.S., Europe, and especially in England. Documentation of these relationships is possible by a large volume of letters written between women. Whether the relationship included any genital component was not a matter for public discourse, but women could form strong and exclusive bonds with each other and still be considered virtuous, innocent, and chaste; a similar relationship with a man would have destroyed a woman's reputation. In fact, these relationships were promoted as alternatives to and practice for a woman's marriage to a man. One such relationship was between Lady Mary Wortley Montagu, who wrote to Anne Wortley in 1709: "Nobody was so entirely, so faithfully yours ... I put in your lovers, for I don't allow it possible for a man to be so sincere as I am." Similarly, English poet Anna Seward had a devoted friendship to Honora Sneyd, who was the subject of many of Seward's sonnets and poems. 
When Sneyd married despite Seward's protest, Seward's poems took on an angry tone. However, Seward continued to write about Sneyd long after Sneyd's death, extolling her beauty and their affection and friendship. As a young woman, writer and philosopher Mary Wollstonecraft was attached to a woman named Fanny Blood. Writing to another woman by whom she had recently felt betrayed, Wollstonecraft declared, "The roses will bloom when there's peace in the breast, and the prospect of living with my Fanny gladdens my heart:—You know not how I love her." Wollstonecraft's first novel addressed, in part, her relationship with Fanny Blood.
Perhaps the most famous of these romantic friendships was between Eleanor Butler and Sarah Ponsonby, nicknamed the Ladies of Llangollen. Butler and Ponsonby eloped in 1778, to the relief of Ponsonby's family (concerned about their reputation had she run away with a man), to live together in Wales for 51 years and be thought of as eccentrics. Their story was considered "the epitome of virtuous romantic friendship" and inspired poetry by Anna Seward and Henry Wadsworth Longfellow. Diarist Anne Lister, captivated by Butler and Ponsonby, recorded her affairs with women between 1817 and 1840. Parts of her diaries were written in code, detailing her sexual relationships with Marianna Belcombe and Maria Barlow. Both Lister and Eleanor Butler were considered masculine by contemporary news reports, and though there were suspicions that these relationships were sapphist in nature, they were nonetheless praised in literature.
Romantic friendships were also popular in the U.S. Enigmatic poet Emily Dickinson wrote over 300 letters and poems to Susan Gilbert, who later became her sister-in-law, and engaged in another romantic correspondence with Kate Scott Anthon. Anthon broke off their relationship the same month Dickinson entered self-imposed lifelong seclusion. Nearby in Hartford, Connecticut, African American freeborn women Addie Brown and Rebecca Primus left evidence of their passion in letters: "No "kisses" is like youres". In Georgia, Alice Baldy wrote to Josie Varner in 1870, "Do you know that if you touch me, or speak to me there is not a nerve of fibre in my body that does not respond with a thrill of delight?"
Around the turn of the 20th century, the development of higher education provided opportunities for women. In all-female surroundings, a culture of romantic pursuit was fostered in women's colleges. Older students mentored younger ones, called on them socially, took them to all-women dances, and sent them flowers, cards, and poems that declared their undying love for each other. These were called "smashes" or "spoons", and they were written about quite frankly in stories for girls aspiring to attend college in publications such as "Ladies Home Journal", a children's magazine titled "St. Nicholas", and a collection called "Smith College Stories", without negative views. Enduring loyalty, devotion, and love were major components of these stories, and sexual acts beyond kissing were consistently absent. Women who had the option of a career instead of marriage labeled themselves New Women, and took their new opportunities very seriously. Faderman calls this period "the last breath of innocence" before 1920, when characterizations of female affection were connected to sexuality, marking lesbians as a unique and often unflattering group. 
Specifically, Faderman connects the growth of women's independence and their beginning to reject strictly prescribed roles in the Victorian era to the scientific designation of lesbianism as a type of aberrant sexual behavior. For some women, the realization that they participated in behavior or relationships that could be categorized as lesbian caused them to deny or conceal it, such as professor Jeannette Augustus Marks at Mount Holyoke College, who lived with the college president, Mary Woolley, for 36 years. Marks discouraged young women from "abnormal" friendships and insisted happiness could only be attained with a man. Other women, however, embraced the distinction and used their uniqueness to set themselves apart from heterosexual women and gay men. From the 1890s to the 1930s, American heiress Natalie Clifford Barney held a weekly salon in Paris to which major artistic celebrities were invited and where lesbian topics were the focus. Combining Greek influences with contemporary French eroticism, she attempted to create an updated and idealized version of Lesbos in her salon. Her contemporaries included artist Romaine Brooks, who painted others in her circle; writers Colette, Djuna Barnes, social host Gertrude Stein, and novelist Radclyffe Hall. Berlin had a vibrant homosexual culture in the 1920s, and about 50 clubs existed that catered to lesbians. "Die Freundin" ("The Girlfriend") magazine, published between 1924 and 1933, targeted lesbians. "Garçonne" (aka "Frauenliebe" ("Woman Love")) was aimed at lesbians and male transvestites. These publications were controlled by men as owners, publishers, and writers. Around 1926, Selli Engler founded "Die BIF – Blätter Idealer Frauenfreundschaften" ("The BIF – Papers on Ideal Women Friendships"), the first lesbian publication owned, published and written by women. In 1928, the lesbian bar and nightclub guide "Berlins lesbische Frauen" ("The Lesbians of Berlin") by Ruth Margarite Röllig further popularized the German capital as a center of lesbian activity. Clubs varied between large establishments that became tourist attractions, to small neighborhood cafes where local women went to meet other women. The cabaret song "Das lila Lied" ("The Lavender Song") became an anthem to the lesbians of Berlin. Although it was sometimes tolerated, homosexuality was illegal in Germany and law enforcement used permitted gatherings as an opportunity to register the names of homosexuals for future reference. Magnus Hirschfeld's Scientific-Humanitarian Committee, which promoted tolerance for homosexuals in Germany, welcomed lesbian participation, and a surge of lesbian-themed writing and political activism in the German feminist movement became evident. In 1928, Radclyffe Hall published a novel titled "The Well of Loneliness". The novel's plot centers around Stephen Gordon, a woman who identifies herself as an invert after reading Krafft-Ebing's "Psychopathia Sexualis", and lives within the homosexual subculture of Paris. The novel included a foreword by Havelock Ellis and was intended to be a call for tolerance for inverts by publicizing their disadvantages and accidents of being born inverted. Hall subscribed to Ellis and Krafft-Ebing's theories and rejected Freud's theory that same-sex attraction was caused by childhood trauma and was curable. 
The publicity Hall received was due to unintended consequences; the novel was tried for obscenity in London, a spectacularly scandalous event described as "the crystallizing moment in the construction of a visible modern English lesbian subculture" by professor Laura Doan. Newspaper stories frankly divulged that the book's content includes "sexual relations between Lesbian women", and photographs of Hall often accompanied details about lesbians in most major print outlets within a span of six months. Hall reflected the appearance of a "mannish" woman in the 1920s: short cropped hair, tailored suits (often with pants), and a monocle, a style that became widely recognized as a "uniform". When British women participated in World War I, they became familiar with masculine clothing, and were considered patriotic for wearing uniforms and pants. However, postwar masculinization of women's clothing became associated with lesbians.
In the United States, the 1920s was a decade of social experimentation, particularly with sex. This was heavily influenced by the writings of Sigmund Freud, who theorized that sexual desire would be sated unconsciously, despite an individual's wish to ignore it. Freud's theories were much more pervasive in the U.S. than in Europe. With the well-publicized notion that sexual acts were a part of lesbianism and lesbian relationships, sexual experimentation was widespread. Large cities that provided a nightlife were immensely popular, and women began to seek out sexual adventure. Bisexuality became chic, particularly in America's first gay neighborhoods. No location saw more visitors for its possibilities of homosexual nightlife than Harlem, the predominantly African American section of New York City. White "slummers" enjoyed jazz, nightclubs, and anything else they wished. Blues singers Ma Rainey, Bessie Smith, Ethel Waters, and Gladys Bentley sang about affairs with women to visitors such as Tallulah Bankhead, Beatrice Lillie, and the soon-to-be-named Joan Crawford. Homosexuals began to draw comparisons between their newly recognized minority status and that of African Americans. Among African American residents of Harlem, lesbian relationships were common and tolerated, though not overtly embraced. Some women staged lavish wedding ceremonies, even filing licenses with New York City using masculine names. Most women, however, were married to men and participated in affairs with women regularly; bisexuality was more widely accepted than lesbianism.
Across town, Greenwich Village also saw a growing homosexual community; both Harlem and Greenwich Village provided furnished rooms for single men and women, which was a major factor in their development as centers for homosexual communities. The tenor was different in Greenwich Village than Harlem, however. Bohemians—intellectuals who rejected Victorian ideals—gathered in the Village. Homosexuals were predominantly male, although figures such as poet Edna St. Vincent Millay and social host Mabel Dodge were known for their affairs with women and promotion of tolerance of homosexuality. Women in the U.S. who could not visit Harlem or live in Greenwich Village were, for the first time, able to visit saloons in the 1920s without being considered prostitutes. The existence of a public space for women to socialize in bars that were known to cater to lesbians "became the single most important public manifestation of the subculture for many decades", according to historian Lillian Faderman. 
The primary component necessary to encourage lesbians to be public and seek other women was economic independence, which virtually disappeared in the 1930s with the Great Depression. Most women in the U.S. found it necessary to marry, to a "front" such as a gay man where both could pursue homosexual relationships with public discretion, or to a man who expected a traditional wife. Independent women in the 1930s were generally seen as holding jobs that men should have. The social attitude made very small and close-knit communities in large cities that centered around bars, while simultaneously isolating women in other locales. Speaking of homosexuality in any context was socially forbidden, and women rarely discussed lesbianism even amongst themselves; they referred to openly gay people as "in the Life". Freudian psychoanalytic theory was pervasive in influencing doctors to consider homosexuality as a neurosis afflicting immature women. Homosexual subculture disappeared in Germany with the rise of the Nazis in 1933. The onset of World War II caused a massive upheaval in people's lives as military mobilization engaged millions of men. Women were also accepted into the military in the U.S. Women's Army Corps (WACs) and U.S. Navy's Women Accepted for Volunteer Emergency Service (WAVES). Unlike processes to screen out male homosexuals, which had been in place since the creation of the American military, there were no methods to identify or screen for lesbians; they were put into place gradually during World War II. Despite common attitudes regarding women's traditional roles in the 1930s, independent and masculine women were directly recruited by the military in the 1940s, and frailty discouraged. Some women were able to arrive at the recruiting station in a man's suit, deny ever having been in love with another woman, and be easily inducted. Sexual activity, however, was forbidden, and blue discharge was almost certain if one identified oneself as a lesbian. As women found each other, they formed into tight groups on base, socialized at service clubs, and began to use code words. Historian Allan Bérubé documented that homosexuals in the armed forces either consciously or subconsciously refused to identify themselves as homosexual or lesbian, and also never spoke about others' orientation. The most masculine women were not necessarily common, though they were visible so they tended to attract women interested in finding other lesbians. Women had to broach the subject about their interest in other women carefully, sometimes taking days to develop a common understanding without asking or stating anything outright. Women who did not enter the military were aggressively called upon to take industrial jobs left by men, in order to continue national productivity. The increased mobility, sophistication, and independence of many women during and after the war made it possible for women to live without husbands, something that would not have been feasible under different economic and social circumstances, further shaping lesbian networks and environments. Lesbians were not included under Paragraph 175, a German statute which made homosexual acts between males a crime. The United States Holocaust Memorial Museum stipulates that this is because women were seen as subordinate to men, and that the Nazi state feared lesbians less than gay men. 
However, the USHMM also claims that many women were arrested and imprisoned for "asocial" behaviour, a label which was applied to women who did not conform to the ideal Nazi image of a woman: cooking, cleaning, kitchen work, child raising, and passivity. These women were labeled with a black triangle. Some lesbians later reclaimed this symbol, much as gay men reclaimed the pink triangle, and many lesbians also reclaimed the pink triangle itself. Following World War II, a nationwide movement pressed to return to pre-war society as quickly as possible in the U.S. Combined with the increasing national paranoia about communism and the psychoanalytic theory that had become pervasive in medical knowledge, this climate made homosexuality an undesired characteristic of employees working for the U.S. government in 1950. Homosexuals were thought to be vulnerable targets to blackmail, and the government purged its employment ranks of open homosexuals, beginning a widespread effort to gather intelligence about employees' private lives. State and local governments followed suit, arresting people for congregating in bars and parks, and enacting laws against cross-dressing for men and women. The U.S. military and government conducted many interrogations, asking if women had ever had sexual relations with another woman and essentially equating even a one-time experience to a criminal identity, thereby severely delineating heterosexuals from homosexuals. In 1952 homosexuality was listed as a pathological emotional disturbance in the American Psychiatric Association's "Diagnostic and Statistical Manual". The view that homosexuality was a curable sickness was widely believed in the medical community, general population, and among many lesbians themselves. Attitudes and practices to ferret out homosexuals in public service positions extended to Australia and Canada. A section to create an offence of "gross indecency" between females was added to a bill in the United Kingdom House of Commons and passed there in 1921, but was rejected in the House of Lords, apparently because the Lords were concerned that any attention paid to sexual misconduct would also promote it. Very little information was available about homosexuality beyond medical and psychiatric texts. Community meeting places consisted of bars that were commonly raided by police once a month on average, with those arrested exposed in newspapers. In response, eight women in San Francisco met in their living rooms in 1955 to socialize and have a safe place to dance. When they decided to make it a regular meeting, they became the first organization for lesbians in the U.S., titled the Daughters of Bilitis (DOB). The DOB began publishing a magazine titled "The Ladder" in 1956. Inside the front cover of every issue was their mission statement, the first point of which was "Education of the variant". It was intended to provide women with knowledge about homosexuality—specifically relating to women and famous lesbians in history. However, by 1956, the term "lesbian" had such a negative meaning that the DOB refused to use it as a descriptor, choosing "variant" instead. The DOB spread to Chicago, New York, and Los Angeles, and "The Ladder" was mailed to hundreds—eventually thousands—of DOB members, discussing the nature of homosexuality, sometimes challenging the idea that it was a sickness, with readers offering their own reasons why they were lesbians and suggesting ways to cope with the condition or society's response to it. 
British lesbians followed with the publication of "Arena Three" beginning in 1964, with a similar mission. As a reflection of categories of sexuality so sharply defined by the government and society at large, lesbian subculture developed extremely rigid gender roles between women, particularly among the working class in the U.S. and Canada. Although many municipalities had enacted laws against cross-dressing, some women would socialize in bars as butches: dressed in men's clothing and mirroring traditional masculine behavior. Others wore traditionally feminine clothing and assumed a more diminutive role as femmes. Butch and femme modes of socialization were so integral to lesbian bars that women who refused to choose between the two would be ignored, or at least unable to date anyone, and butch women becoming romantically involved with other butch women or femmes with other femmes was unacceptable. Butch women were not a novelty in the 1950s; even in Harlem and Greenwich Village in the 1920s some women assumed these personae. In the 1950s and 1960s, however, the roles were pervasive and not limited to North America: from 1940 to 1970, butch/femme bar culture flourished in Britain, though there were fewer class distinctions. The roles further identified members of a group that had been marginalized; women who had been rejected by most of society gained an inside view of an exclusive group that required considerable insider knowledge to navigate. Butch and femme were considered coarse by American lesbians of higher social standing during this period. Many wealthier women married to satisfy their familial obligations, and others escaped to Europe to live as expatriates. Regardless of the lack of information about homosexuality in scholarly texts, another forum for learning about lesbianism was growing. A paperback book titled "Women's Barracks" describing a woman's experiences in the Free French Forces was published in 1950. It told of a lesbian relationship the author had witnessed. After it sold 4.5 million copies, it was cited by the House Select Committee on Current Pornographic Materials in 1952. Its publisher, Gold Medal Books, followed with the novel "Spring Fire" in 1952, which sold 1.5 million copies. Gold Medal Books was overwhelmed with mail from women writing about the subject matter, and followed with more books, creating the genre of lesbian pulp fiction. Between 1955 and 1969 over 2,000 books were published using lesbianism as a topic, and they were sold in corner drugstores, train stations, bus stops, and newsstands all over the U.S. and Canada. Most were written by, and almost all were marketed to, heterosexual men. Coded words and images were used on the covers. Instead of "lesbian", terms such as "strange", "twilight", "queer", and "third sex" were used in the titles, and cover art was invariably salacious. A handful of lesbian pulp fiction authors were women writing for lesbians, including Ann Bannon, Valerie Taylor, Paula Christian, and Vin Packer/Ann Aldrich. Bannon, who also purchased lesbian pulp fiction, later stated that women identified the material iconically by the cover art. Many of the books used cultural references, naming places and terms and describing modes of dress and other codes, to reach isolated women. As a result, pulp fiction helped spread awareness of a lesbian identity to lesbians and heterosexual readers alike. 
The social rigidity of the 1950s and early 1960s encountered a backlash as social movements to improve the standing of African Americans, the poor, women, and gays all became prominent. Of the latter two, the gay rights movement and the feminist movement connected after a violent confrontation occurred in New York City in the 1969 Stonewall riots. What followed was a movement characterized by a surge of gay activism and feminist consciousness that further transformed the definition of lesbian. The sexual revolution in the 1970s introduced the differentiation between identity and sexual behavior for women. Many women took advantage of their new social freedom to try new experiences. Women who previously identified as heterosexual tried sex with women, though many maintained their heterosexual identity. However, with the advent of second wave feminism, lesbian as a political identity grew to describe a social philosophy among women, often overshadowing sexual desire as a defining trait. A militant feminist organization named Radicalesbians published a manifesto in 1970 entitled "The Woman-Identified Woman" that declared "A lesbian is the rage of all women condensed to the point of explosion". Militant feminists expressed their disdain with an inherently sexist and patriarchal society, and concluded the most effective way to overcome sexism and attain the equality of women would be to deny men any power or pleasure from women. For women who subscribed to this philosophy—dubbing themselves lesbian-feminists—lesbian was a term chosen by women to describe any woman who dedicated her approach to social interaction and political motivation to the welfare of women. Sexual desire was not the defining characteristic of a lesbian-feminist, but rather her focus on politics. Independence from men as oppressors was a central tenet of lesbian-feminism, and many believers strove to separate themselves physically and economically from traditional male-centered culture. In the ideal society, named Lesbian Nation, "woman" and "lesbian" were interchangeable. Although lesbian-feminism was a significant shift, not all lesbians agreed with it. Lesbian-feminism was a youth-oriented movement: its members were primarily college educated, with experience in New Left and radical causes, but they had not seen any success in persuading radical organizations to take up women's issues. Many older lesbians who had acknowledged their sexuality in more conservative times felt maintaining their ways of coping in a homophobic world was more appropriate. The Daughters of Bilitis folded in 1970 over which direction to focus on: feminism or gay rights issues. As equality was a priority for lesbian-feminists, disparity of roles between men and women or butch and femme were viewed as patriarchal. Lesbian-feminists eschewed gender role play that had been pervasive in bars, as well as the perceived chauvinism of gay men; many lesbian-feminists refused to work with gay men, or take up their causes. However, lesbians who held a more essentialist view that they had been born homosexual and used the descriptor "lesbian" to define sexual attraction, often considered the separatist, angry opinions of lesbian-feminists to be detrimental to the cause of gay rights. In 1980, poet and essayist Adrienne Rich expanded upon the political meaning of lesbian by proposing a continuum of lesbian existence based on "woman-identified experience" in her essay "Compulsory Heterosexuality and Lesbian Existence". 
All relationships between women, Rich proposed, have some lesbian element, regardless of whether the women claim a lesbian identity: mothers and daughters, women who work together, and women who nurse each other, for example. Such a perception of women relating to each other connects them through time and across cultures, and Rich considered heterosexuality a condition forced upon women by men. Several years earlier, DOB founders Del Martin and Phyllis Lyon had similarly regarded sexual acts as unnecessary in determining what a lesbian is, providing their own definition: "a woman whose primary erotic, psychological, emotional and social interest is in a member of her own sex, even though that interest may not be overtly expressed". Arabic-language historical records have used various terms to describe sexual practices between women. A common one is "sahq", which refers to the act of "rubbing". Lesbian practices and identities are, however, largely absent from the historical record. The common term to describe lesbianism in Arabic today is essentially the same term used to describe men, and thus the distinction between male and female homosexuality is to a certain extent linguistically obscured in contemporary queer discourse. Overall, the study of contemporary lesbian experience in the region is complicated by power dynamics in the postcolonial context, shaped even by what some scholars refer to as "homonationalism", the use of a politicized understanding of sexual categories to advance specific national interests on the domestic and international stage. Female homosexual behavior may be present in every culture, although the concept of a lesbian as a woman who pairs exclusively with other women is not. Attitudes about female homosexual behavior are dependent upon women's roles in each society and each culture's definition of sex. Women in the Middle East have been historically segregated from men. In the 7th and 8th centuries, some extraordinary women dressed in male attire when gender roles were less strict, but the sexual roles that accompanied European women were not associated with Islamic women. The Caliphal court in Baghdad featured women who dressed as men, including false facial hair, but they competed with other women for the attentions of men. According to the 12th century writings of Sharif al-Idrisi, highly intelligent women were more likely to be lesbians; their intellectual prowess put them on a more even footing with men. Relations between women who lived in harems, and fears of women being sexually intimate in Turkish baths, were expressed in writings by men. Women, however, were mostly silent, and men likewise rarely wrote about lesbian relationships. It is unclear to historians whether the rare instances of lesbianism mentioned in literature are an accurate historical record or intended to serve as fantasies for men. A 1978 treatise about repression in Iran asserted that women were completely silenced: "In the whole of Iranian history, [no woman] has been allowed to speak out for such tendencies ... To attest to lesbian desires would be an unforgivable crime." Although the authors of "Islamic Homosexualities" argued this did not mean women could not engage in lesbian relationships, a lesbian anthropologist who visited Yemen in 1991 reported that women in the town she visited were unable to comprehend her romantic relationship with another woman. Women in Pakistan are expected to marry men; those who do not are ostracized. 
Women, however, may have intimate relations with other women as long as their wifely duties are met, their private matters are kept quiet, and the woman with whom they are involved is somehow related by family or logical interest to her lover. Individuals identifying with or otherwise engaging in lesbian practices in the region can face family violence and societal persecution, including what are commonly referred to as "honor killings". The justifications provided by murderers relate to a person's perceived sexual immorality or loss of virginity (outside of acceptable frames of marriage), and the victims are primarily female. Some Indigenous peoples of the Americas conceptualize a third gender for women who dress as, and fulfill the roles usually filled by, men in their cultures. In other cases they may see gender as a spectrum, and use different terms for feminine women and masculine women. However, these identities are rooted in the context of the ceremonial and cultural lives of the particular Indigenous cultures, and "simply being gay and Indian does not make someone a Two-Spirit." These ceremonial and social roles, which are conferred and confirmed by the person's elders, "do not make sense" when defined by non-Native concepts of sexual orientation and gender identity. Rather, they must be understood in an Indigenous context, as traditional spiritual and social roles held by the person in their Indigenous community. In Latin America, lesbian consciousness and associations appeared in the 1970s, increasing while several countries transitioned to or reformed democratic governments. Harassment and intimidation have been common even in places where homosexuality is legal, and laws against the corruption of minors and against offenses to morality or "the good ways" ("faltas a la moral o las buenas costumbres") have been used to persecute homosexuals. From the Hispanic perspective, the conflict between the lesbophobia of some feminists and the misogyny of gay men has created a difficult path for lesbians and associated groups. Argentina was the first Latin American country with a gay rights group, "Nuestro Mundo" (NM, or Our World), created in 1969. Six mostly secret organizations concentrating on gay or lesbian issues were founded around this time, but persecution and harassment were continuous and grew worse with the dictatorship of Jorge Rafael Videla in 1976, when all groups were dissolved in the Dirty War. Lesbian rights groups have gradually formed since 1986 to build a cohesive community that works to overcome philosophical differences with heterosexual women. The Latin American lesbian movement has been the most active in Mexico, but has encountered similar problems in effectiveness and cohesion. While groups try to promote lesbian issues and concerns, they also face misogynistic attitudes from gay men and homophobic views from heterosexual women. In 1977, "Lesbos", the first lesbian organization for Mexicans, was formed. Several incarnations of political groups promoting lesbian issues have evolved; 13 lesbian organizations were active in Mexico City in 1997. Ultimately, however, lesbian associations have had little influence on both the homosexual and feminist movements. In Chile, the dictatorship of Augusto Pinochet forbade the creation of lesbian groups until 1984, when "Ayuquelén" ("joy of being" in Mapuche) was founded, prompted by the very public beating death of a woman amid shouts of "Damned lesbian!" from her attacker. 
The lesbian movement has been closely associated with the feminist movement in Chile, although the relationship has sometimes been strained. "Ayuquelén" worked with the International Lesbian Information Service, the International Lesbian, Gay, Bisexual, Trans and Intersex Association, and the Chilean gay rights group "Movimiento de Integración y Liberación Homosexual" (Movement to Integrate and Liberate Homosexuals) to remove the sodomy law still in force in Chile. Lesbian consciousness became more visible in Nicaragua in 1986, when the Sandinista National Liberation Front expelled gay men and lesbians from its midst. State persecution prevented the formation of associations until AIDS became a concern, when educational efforts forced sexual minorities to band together. The first lesbian organization was "Nosotras", founded in 1989. An effort to promote visibility from 1991 to 1992 provoked the government to declare homosexuality illegal in 1994, effectively ending the movement until 2004, when "Grupo Safo – Grupo de Mujeres Lesbianas de Nicaragua" was created, four years before homosexuality became legal again. The meetings of feminist lesbians of Latin America and the Caribbean, sometimes shortened to "Lesbian meetings", have been an important forum for the exchange of ideas for Latin American lesbians since the late 1980s. With rotating hosts and biannual gatherings, its main aims are the creation of communication networks, to change the situation of lesbians in Latin America (both legally and socially), to increase solidarity between lesbians, and to destroy the existing myths about them. Cross-gender roles and marriage between women have also been recorded in over 30 African societies. Women may marry other women, raise their children, and be generally thought of as men in societies in Nigeria, Cameroon, and Kenya. The Hausa people of Sudan have a term equivalent to lesbian, "kifi", that may also be applied to males to mean "neither party insists on a particular sexual role". Near the Congo River, a female who participates in strong emotional or sexual relationships with another female among the Nkundo people is known as "yaikya bonsángo" (a woman who presses against another woman). Lesbian relationships are also known in matrilineal societies in Ghana among the Akan people. In Lesotho, females engage in what the Western world commonly considers sexual behavior: they kiss, sleep together, rub genitals, participate in cunnilingus, and maintain their relationships with other females vigilantly. Since the people of Lesotho believe sex requires a penis, however, they do not consider their behavior sexual, nor do they label themselves lesbians. In South Africa, lesbians are raped by heterosexual men with the goal of punishing "abnormal" behavior and reinforcing societal norms, a practice known as "corrective rape". The crime was first identified in South Africa, where it is sometimes supervised by members of the woman's family or local community, and it is a major contributor to HIV infection in South African lesbians. "Corrective rape" is not recognized by the South African legal system as a hate crime despite the fact that the South African Constitution states that no person shall be discriminated against based on their social status and identity, including sexual orientation. Legally, South Africa protects gay rights extensively, but the government has not taken proactive action to prevent corrective rape, and women do not have much faith in the police and their investigations. 
Corrective rape is reported to be on the rise in South Africa. The South African nonprofit "Luleki Sizwe" estimates that more than 10 lesbians are raped or gang-raped on a weekly basis. As made public by the Triangle Project in 2008, at least 500 lesbians become victims of corrective rape every year, and 86% of black lesbians in the Western Cape live in fear of being sexually assaulted. Victims of corrective rape are less likely to report the crime because of their society's negative beliefs about homosexuality. China before westernization was another society that segregated men from women. Historical Chinese culture has not recognized a concept of sexual orientation, or a framework to divide people based on their same-sex or opposite-sex attractions. Although there was a significant culture surrounding homosexual men, there was none for women. Outside their duties to bear sons to their husbands, women were perceived as having no sexuality at all. This did not mean that women could not pursue sexual relationships with other women, but that such associations could not impose upon women's relationships to men. Rare references to lesbianism were written by Ying Shao, who described same-sex relationships between women in imperial courts, in which the women behaved as husband and wife, as "dui shi" (paired eating). "Golden Orchid Associations" in Southern China existed into the 20th century and promoted formal marriages between women, who were then allowed to adopt children. Westernization brought new ideas that all sexual behavior not resulting in reproduction was aberrant. The liberty of being employed in silk factories starting in 1865 allowed some women to style themselves "tzu-shu nii" (never to marry) and live in communes with other women. Other Chinese called them "sou-hei" (self-combers) for adopting hairstyles of married women. These communes died out during the Great Depression and were subsequently discouraged by the communist government as a relic of feudal China. In contemporary Chinese society, "tongzhi" (same goal or spirit) is the term used to refer to homosexuals; most Chinese are reluctant to divide this classification further to identify lesbians. In Japan, the term "rezubian", a Japanese pronunciation of "lesbian", was used during the 1920s. Westernization brought more independence for women and allowed some Japanese women to wear pants. The cognate tomboy is used in the Philippines, and particularly in Manila, to denote women who are more masculine. Virtuous women in Korea prioritize motherhood, chastity, and virginity; outside this scope, very few women are free to express themselves through sexuality, although there is a growing organization for lesbians named "Kkirikkiri". The term "pondan" is used in Malaysia to refer to gay men, but since there is no historical context to reference lesbians, the term is used for female homosexuals as well. As in many Asian countries, open homosexuality is discouraged at many social levels, so many Malaysians lead double lives. In India, a 14th-century text mentioning a lesbian couple who had a child as a result of their lovemaking is an exception to the general silence about female homosexuality. According to Ruth Vanita, this invisibility disappeared with the release of a film titled "Fire" in 1996, prompting attacks on some theaters in India by religious extremists. 
Terms used to label homosexuals are often rejected by Indian activists for being the result of imperialist influence, but most discourse on homosexuality centers on men. Women's rights groups in India continue to debate the legitimacy of including lesbian issues in their platforms, as lesbians and material focusing on female homosexuality are frequently suppressed. The most extensive early study of female homosexuality was provided by the Institute for Sex Research, which published an in-depth report on the sexual experiences of American women in 1953. More than 8,000 women were interviewed by Alfred Kinsey and the staff of the Institute for Sex Research, and the findings were published in a book titled "Sexual Behavior in the Human Female", popularly known as part of the Kinsey Report. The Kinsey Report's dispassionate discussion of homosexuality as a form of human sexual behavior was revolutionary. Up to this study, only physicians and psychiatrists studied sexual behavior, and almost always the results were interpreted with a moral view. Kinsey and his staff reported that 28% of women had been aroused by another female, and 19% had had sexual contact with another female. Of the women who had sexual contact with another female, half to two-thirds had orgasmed. Single women had the highest prevalence of homosexual activity, followed by women who were widowed, divorced, or separated. The lowest occurrence of sexual activity was among married women; those with previous homosexual experience reported they married to stop homosexual activity. Most of the women who reported homosexual activity had not experienced it more than ten times. Fifty-one percent of women reporting homosexual experience had only one partner. Women with post-graduate education had a higher prevalence of homosexual experience, followed by women with a college education; the smallest occurrence was among women with education no higher than eighth grade. However, Kinsey's methodology was criticized. On Kinsey's scale, where 0 represents a person with an exclusively heterosexual response, 6 represents a person with an exclusively homosexual one, and the numbers in between represent a gradient of responses involving both sexes, 6% of those interviewed ranked as a 6: exclusively homosexual. Apart from those who ranked 0 (71%), the largest group between 0 and 6 was rank 1, at approximately 15%. However, the Kinsey Report remarked that the ranking described a period in a person's life, and that a person's orientation may change. Among the criticisms the Kinsey Report received, a particular one addressed the Institute for Sex Research's tendency to use statistical sampling, which facilitated an over-representation of same-sex relationships by other researchers who did not adhere to Kinsey's qualifications of data. Twenty-three years later, in 1976, sexologist Shere Hite published a report on the sexual encounters of 3,019 women who had responded to questionnaires, under the title "The Hite Report". Hite's questions differed from Kinsey's, focusing more on how women identified and what they preferred rather than on their experiences. Of the respondents to Hite's questions, 8% indicated they preferred sex with women, and 9% identified as bisexual or reported sexual experiences with both men and women while declining to indicate a preference. Hite's conclusions are based more on respondents' comments than on quantifiable data. 
She found it "striking" that many women who had no lesbian experiences indicated they were interested in sex with women, particularly because the question was not asked. Hite found the two most significant differences between respondents' experience with men and women were the focus on clitoral stimulation, and more emotional involvement and orgasmic responses. Since Hite performed her study during the popularity of feminism in the 1970s, she also acknowledged that women may have chosen the political identity of a lesbian. Lesbians in the U.S. are estimated to be about 2.6% of the population, according to a National Opinion Research Center survey of sexually active adults who had had same-sex experiences within the past year, completed in 2000. A survey of same-sex couples in the United States showed that between 2000 and 2005, the number of people claiming to be in same-sex relationships increased by 30%—five times the rate of population growth in the U.S. The study attributed the jump to people being more comfortable self-identifying as homosexual to the federal government. The government of the United Kingdom does not ask citizens to define their sexuality. However, a survey by the UK Office for National Statistics (ONS) in 2010 found that 1.5% of Britons identified themselves as gay or bisexual, and the ONS suggests that this is in line with other surveys showing the number between 0.3% and 3%. Estimates of lesbians are sometimes not differentiated in studies of same-sex households, such as those performed by the U.S. census, and estimates of total gay, lesbian, or bisexual population by the UK government. However, polls in Australia have recorded a range of self-identified lesbian or bisexual women from 1.3% to 2.2% of the total population. In terms of medical issues, lesbians are referred to as women who have sex with women (WSW) because of the misconceptions and assumptions about women's sexuality and some women's hesitancy to disclose their accurate sexual histories even to a physician. Many self-identified lesbians neglect to see a physician because they do not participate in heterosexual activity and require no birth control, which is the initiating factor for most women to seek consultation with a gynecologist when they become sexually active. As a result, many lesbians are not screened regularly with Pap smears. The U.S. government reports that some lesbians neglect seeking medical screening in the U.S.; they lack health insurance because many employers do not offer health benefits to domestic partners. The result of the lack of medical information on WSW is that medical professionals and some lesbians perceive lesbians as having lower risks of acquiring sexually transmitted diseases or types of cancer. When women do seek medical attention, medical professionals often fail to take a complete medical history. In a 2006 study of 2,345 lesbian and bisexual women, only 9.3% had claimed they had ever been asked their sexual orientation by a physician. A third of the respondents believed disclosing their sexual history would result in a negative reaction, and 30% had received a negative reaction from a medical professional after identifying themselves as lesbian or bisexual. A patient's complete history helps medical professionals identify higher risk areas and corrects assumptions about the personal histories of women. In a similar survey of 6,935 lesbians, 77% had had sexual contact with one or more male partners, and 6% had that contact within the previous year. 
Heart disease is listed by the U.S. Department of Health and Human Services as the number one cause of death for all women. Factors that add to the risk of heart disease include obesity and smoking, both of which are more prevalent in lesbians. Studies show that lesbians have a higher body mass and are generally less concerned about weight issues than heterosexual women, and that lesbians consider women with higher body masses to be more attractive than heterosexual women do. Lesbians are more likely to exercise regularly than heterosexual women, and lesbians do not generally exercise for aesthetic reasons, although heterosexual women do. Research is needed to determine specific causes of obesity in lesbians. The lack of differentiation between homosexual and heterosexual women in medical studies that concentrate on health issues for women skews results for lesbians and non-lesbian women. Reports are inconclusive about the occurrence of breast cancer in lesbians. It has been determined, however, that the lower rate of regular Pap smear testing among lesbians makes it more difficult to detect cervical cancer at early stages in lesbians. The risk factors for developing ovarian cancer are higher in lesbians than in heterosexual women, perhaps because many lesbians lack the protective factors of pregnancy, abortion, contraceptives, breast feeding, and miscarriages. Some sexually transmitted diseases are communicable between women, including human papillomavirus (HPV)—specifically genital warts—squamous intraepithelial lesions, trichomoniasis, syphilis, and herpes simplex virus (HSV). Transmission of specific sexually transmitted diseases among women who have sex with women depends on the sexual practices women engage in. Any object that comes in contact with cervical secretions, vaginal mucosa, or menstrual blood, including fingers or penetrative objects, may transmit sexually transmitted diseases. Orogenital contact may indicate a higher risk of acquiring HSV, even among women who have had no prior sex with men. Bacterial vaginosis (BV) occurs more often in lesbians, but it is unclear if BV is transmitted by sexual contact; it occurs in celibate as well as sexually active women. BV often occurs in both partners in a lesbian relationship; one study of women with BV found that 81% had partners with BV. Lesbians are not included as a category in tracking the frequency of human immunodeficiency virus (HIV) transmission, although transmission is possible through vaginal and cervical secretions. The highest rate of transmission of HIV to lesbians is among women who participate in intravenous drug use or have sexual intercourse with bisexual men. Since medical literature began to describe homosexuality, it has often been approached from a view that sought to find an inherent psychopathology as the root cause, influenced by the theories of Sigmund Freud. Although he considered bisexuality inherent in all people, and said that most have phases of homosexual attraction or experimentation, he attributed exclusive same-sex attraction to stunted development resulting from trauma or parental conflicts. Much of the literature on mental health and lesbians centered on their depression, substance abuse, and suicide. Although these issues exist among lesbians, discussion about their causes shifted after homosexuality was removed from the "Diagnostic and Statistical Manual" in 1973. 
Instead, social ostracism, legal discrimination, internalization of negative stereotypes, and limited support structures came to be cited as the factors homosexuals face in Western societies that often adversely affect their mental health. Women who identify as lesbian report feeling significantly different and isolated during adolescence. These emotions have been cited as appearing on average at 15 years old in lesbians and 18 years old in women who identify as bisexual. On the whole, women tend to work through developing a self-concept internally, or with other women with whom they are intimate. Women also limit to whom they divulge their sexual identities, and more often see being lesbian as a choice, as opposed to gay men, who work more externally and see being gay as outside their control. Anxiety disorders and depression are the most common mental health issues for women. Depression is reported among lesbians at a rate similar to that among heterosexual women, although generalized anxiety disorder is more likely to appear among lesbian and bisexual women than heterosexual women. Depression is a more significant problem among women who feel they must hide their sexual orientation from friends and family, or who experience compounded ethnic or religious discrimination, or who endure relationship difficulties with no support system. Men's shaping of women's sexuality has proven to have an effect on how lesbians see their own bodies. Studies have shown that heterosexual men and lesbians have different standards for what they consider attractive in women. Lesbians who view themselves with male standards of female beauty may experience lower self-esteem, eating disorders, and a higher incidence of depression. More than half the respondents to a 1994 survey of health issues in lesbians reported they had had suicidal thoughts, and 18% had attempted suicide. A population-based study completed by the National Alcohol Research Center found that women who identify as lesbian or bisexual are less likely to abstain from alcohol. Lesbians and bisexual women have a higher likelihood of reporting problems with alcohol, as well as of being dissatisfied with treatment in substance abuse programs. Many lesbian communities are centered in bars, and drinking is an activity that correlates to community participation for lesbians and bisexual women. Lesbians portrayed in literature, film, and television often shape contemporary thought about women's sexuality. The majority of media about lesbians is produced by men; women's publishing companies did not develop until the 1970s, films about lesbians made by women did not appear until the 1980s, and television shows portraying lesbians written by women only began to be created in the 21st century. As a result, homosexuality—particularly as it relates to women—has been largely excluded from media through symbolic annihilation. When depictions of lesbians began to surface, they were often one-dimensional, simplified stereotypes. In addition to Sappho's accomplishments, literary historian Jeannette Howard Foster includes the Book of Ruth and ancient mythological tradition as examples of lesbianism in classical literature. Greek stories of the heavens often included a female figure whose virtue and virginity were unspoiled, who pursued more masculine interests, and who was followed by a dedicated group of maidens. Foster cites Camilla and Diana, Artemis and Callisto, and Iphis and Ianthe as examples of female mythological figures who showed remarkable devotion to each other, or who defied gender expectations. 
The Greeks are also credited with spreading the story of a mythological race of women warriors named Amazons. En-hedu-ana, a priestess in Ancient Iraq who dedicated herself to the Sumerian goddess Inanna, has the distinction of being the author of the oldest surviving signed poetry in history. She characterized herself as Inanna's spouse. For ten centuries after the fall of the Roman Empire, lesbianism disappeared from literature. Foster points to the particularly strict view that Eve—representative of all women—caused the downfall of mankind; original sin among women was a particular concern, especially because women were perceived as creating life. During this time, women were largely illiterate and not encouraged to engage in intellectual pursuit, so men were responsible for shaping ideas about sexuality. In French and English depictions of relationships between women from the 17th and 18th centuries ("Lives of Gallant Ladies" by Brantôme in 1665, John Cleland's 1749 erotica "Memoirs of a Woman of Pleasure", "L'Espion Anglais" by various authors in 1778), writers' attitudes spanned from amused tolerance to arousal, whereupon a male character would participate to complete the act. Physical relationships between women were often encouraged; men felt no threat, as they viewed sexual acts between women as acceptable when men were not available, and not comparable to the fulfillment that could be achieved by sexual acts between men and women. At worst, if a woman became enamored of another woman, she became a tragic figure. Physical and therefore emotional satisfaction was considered impossible without a natural phallus. Male intervention into relationships between women was necessary only when women acted as men and demanded the same social privileges. Lesbianism became almost exclusive to French literature in the 19th century, based on male fantasy and the desire to shock bourgeois moral values. Honoré de Balzac, in "The Girl with the Golden Eyes" (1835), employed lesbianism in his story about three people living amongst the moral degeneration of Paris, and again in "Cousin Bette" and "Séraphîta". His work influenced novelist Théophile Gautier's "Mademoiselle de Maupin", which provided the first description of a physical type that became associated with lesbians: tall, wide-shouldered, slim-hipped, and athletically inclined. Charles Baudelaire repeatedly used lesbianism as a theme in his poems "Lesbos", "Femmes damnées 1" ("Damned Women"), and "Femmes damnées 2". Reflecting French society, as well as employing stock character associations, many of the lesbian characters in 19th-century French literature were prostitutes or courtesans: personifications of vice who died early, violent deaths in moral endings. Samuel Taylor Coleridge's 1816 poem "Christabel" and the novella "Carmilla" (1872) by Sheridan Le Fanu both present lesbianism associated with vampirism. Portrayals of female homosexuality not only formed European consciousness about lesbianism but also informed the sexologists: Krafft-Ebing cited the characters in Gustave Flaubert's "Salammbô" (1862) and Ernest Feydeau's "Le Comte de Chalis" (1867) as examples of lesbians because both novels feature female protagonists who do not adhere to social norms and express "contrary sexual feeling", although neither character experiences same-sex desire or engages in same-sex sexual behavior. Havelock Ellis used literary examples from Balzac and several French poets and writers to develop his framework to identify sexual inversion in women. 
Gradually, women began to author their own thoughts and literary works about lesbian relationships. Until the publication of "The Well of Loneliness", most major works involving lesbianism were penned by men. Foster suggests that women would have encountered suspicion about their own lives had they used same-sex love as a topic, and that some writers, including Louise Labé, Charlotte Charke, and Margaret Fuller, either changed the pronouns in their literary works to male or made them ambiguous. Author George Sand was portrayed as a character in several works in the 19th century; writer Mario Praz credited the popularity of lesbianism as a theme to Sand's appearance in Paris society in the 1830s. Charlotte Brontë's "Villette" in 1853 initiated a genre of boarding school stories with homoerotic themes. In the 20th century, Katherine Mansfield, Amy Lowell, Gertrude Stein, H.D., Vita Sackville-West, Virginia Woolf, and Gale Wilhelm wrote popular works that had same-sex relationships as themes. Some women, such as Marguerite Yourcenar and Mary Renault, wrote or translated works of fiction that focused on homosexual men, as did some of the writings of Carson McCullers. All three were involved in same-sex relationships, but their primary friendships were with gay men. Foster further asserts 1928 was a "peak year" for lesbian-themed literature; in addition to "The Well of Loneliness", three other novels with lesbian themes were published in England: Elizabeth Bowen's "The Hotel", Woolf's "Orlando", and Compton Mackenzie's satirical novel "Extraordinary Women". Unlike "The Well of Loneliness", none of these novels were banned. As the paperback book came into fashion, lesbian themes were relegated to pulp fiction. Many of the pulp novels typically presented very unhappy women, or relationships that ended tragically. Marijane Meaker later wrote that she was told to make the relationship end badly in "Spring Fire" because the publishers were concerned about the books being confiscated by the U.S. Postal Service. Patricia Highsmith refused to follow this directive when she wrote "The Price of Salt" in 1951, but instead published it under the pseudonym Claire Morgan. Following the Stonewall riots, lesbian themes in literature became much more diverse and complex, and shifted the focus of lesbianism from erotica for heterosexual men to works written by and for lesbians. Feminist magazines such as "The Furies" and "Sinister Wisdom" replaced "The Ladder". Serious writers who used lesbian characters and plots included Rita Mae Brown, whose "Rubyfruit Jungle" (1973) presents a feminist heroine who chooses to be a lesbian. Poet Audre Lorde confronts homophobia and racism in her works, and Cherríe Moraga is credited with being primarily responsible for bringing Latina perspectives to lesbian literature. Further changing values are evident in the writings of Dorothy Allison, who focuses on child sexual abuse and deliberately provocative lesbian sadomasochism themes. Lesbianism, or the suggestion of it, began early in filmmaking. The same constructs of how lesbians were portrayed, and for what reasons, that had appeared in literature were placed on women in films. Women challenging their feminine roles was a device more easily accepted than men challenging masculine ones. Actresses appeared as men in male roles because of plot devices as early as 1914 in "A Florida Enchantment" featuring Edith Storey. 
In "Morocco" (1930) Marlene Dietrich kisses another woman on the lips, and Katharine Hepburn plays a man in "Christopher Strong" in 1933 and again in "Sylvia Scarlett" (1936). Hollywood films followed the same trend set by audiences who flocked to Harlem to see edgy shows that suggested bisexuality. Overt female homosexuality was introduced in 1929's "Pandora's Box" between Louise Brooks and Alice Roberts. However, the development of the Hays Code in 1930 censored most references to homosexuality from film under the umbrella term "sex perversion". German films depicted homosexuality and were distributed throughout Europe, but 1931's "Mädchen in Uniform" was not distributed in the U.S. because of the depiction of an adolescent's love for a female teacher in boarding school. Because of the Hays Code, lesbianism after 1930 was absent from most films, even those adapted with overt lesbian characters or plot devices. Lillian Hellman's play "The Children's Hour" was converted into a heterosexual love triangle and retitled "These Three". Biopic "Queen Christina" in 1933, starring Greta Garbo, veiled most of the speculation about Christina of Sweden's affairs with women. Homosexuality or lesbianism was never mentioned outright in the films while the Hays Code was enforced. The reason censors stated for removing a lesbian scene in 1954's "The Pit of Loneliness" was that it was, "Immoral, would tend to corrupt morals". The code was relaxed somewhat after 1961, and the next year William Wyler remade "The Children's Hour" with Audrey Hepburn and Shirley MacLaine. After MacLaine's character admits her love for Hepburn's, she hangs herself; this set a precedent for miserable endings in films addressing homosexuality. Gay characters also were often killed off at the end, such as the death of Sandy Dennis' character at the end of "The Fox" in 1968. If not victims, lesbians were depicted as villains or morally corrupt, such as portrayals of brothel madames by Barbara Stanwyck in "Walk on the Wild Side" from 1962 and Shelley Winters in "The Balcony" in 1963. Lesbians as predators were presented in "Rebecca" (1940), women's prison films like "Caged" (1950), or in the character Rosa Klebb in "From Russia with Love" (1963). Lesbian vampire themes have reappeared in "Dracula's Daughter" (1936), "Blood and Roses" (1960), "Vampyros Lesbos" (1971), and "The Hunger" (1983). "Basic Instinct" (1992) featured a bisexual murderer played by Sharon Stone; it was one of several films that set off a storm of protests about the depiction of gays as predators. The first film to address lesbianism with significant depth was "The Killing of Sister George" in 1968, which was filmed in The Gateways Club, a longstanding lesbian pub in London. It is the first to claim a film character who identifies as a lesbian, and film historian Vito Russo considers the film a complex treatment of a multifaceted character who is forced into silence about her openness by other lesbians. "Personal Best" in 1982, and "Lianna" in 1983 treat the lesbian relationships more sympathetically and show lesbian sex scenes, though in neither film are the relationships happy ones. "Personal Best" was criticized for engaging in the cliched plot device of one woman returning to a relationship with a man, implying that lesbianism is a phase, as well as treating the lesbian relationship with "undisguised voyeurism". 
More ambiguous portrayals of lesbian characters were seen in "Silkwood" (1983), "The Color Purple" (1985), and "Fried Green Tomatoes" (1991), despite explicit lesbianism in the source material. An era of independent filmmaking brought different stories, writers, and directors to films. "Desert Hearts" arrived in 1985 and became one of the most successful. Directed by lesbian Donna Deitch, it is loosely based on Jane Rule's novel "Desert of the Heart". It received mixed critical commentary, but earned positive reviews from the gay press. The late 1980s and early 1990s ushered in a series of films treating gay and lesbian issues seriously, made by gays and lesbians, nicknamed New Queer Cinema. Films using lesbians as a subject included Rose Troche's avant garde romantic comedy "Go Fish" (1994) and the first film about African American lesbians, Cheryl Dunye's "The Watermelon Woman", in 1995. Realism in films depicting lesbians developed further to include romance stories such as "The Incredibly True Adventure of Two Girls in Love" and "When Night Is Falling", both in 1995, "Better Than Chocolate" (1999), and the social satire "But I'm a Cheerleader" (also in 1999). A twist on the lesbian-as-predator theme was the added complexity of motivations of some lesbian characters in Peter Jackson's "Heavenly Creatures" (1994) and the Oscar-winning biopic of Aileen Wuornos, "Monster" (2003), and the exploration of fluid sexuality and gender in "Chasing Amy" (1997), "Kissing Jessica Stein" (2001), and "Boys Don't Cry" (1999). The film "V for Vendetta" depicts a dictatorship in a future Britain in which lesbians, homosexuals, and other "unwanted" people are systematically rounded up and slaughtered in concentration camps. In the film, a lesbian actress named Valerie, who was killed in such a manner, serves as inspiration for the masked rebel V and his ally Evey Hammond, who set out to overthrow the dictatorship. The first stage production to feature a lesbian kiss and an open depiction of two women in love is the 1907 Yiddish play "God of Vengeance" ("Got fun nekome") by Sholem Asch. Rivkele, a young woman, and Manke, a prostitute in her father's brothel, fall in love. On March 6, 1923, during a performance of the play in a New York City theatre, the producers and cast were informed that they had been indicted by a grand jury for violating the Penal Code provision against presenting "an obscene, indecent, immoral and impure theatrical production". They were arrested the following day when they appeared before a judge. Two months later, they were found guilty in a jury trial. The producers were fined $200 and the cast received suspended sentences. The play is considered by some to be "the greatest drama of the Yiddish theater". "God of Vengeance" was the inspiration for the 2015 play "Indecent" by Paula Vogel, which features lesbian characters Rifkele and Manke. "Indecent" was nominated for the 2017 Tony Award for Best Play and the Drama Desk Award for Outstanding Play. The Broadway musical "The Prom" featured lesbian characters Emma Nolan and Alyssa Greene. In 2019, the production was nominated for six Tony Awards, including Best Musical, and received the Drama Desk Award for Outstanding Musical. A performance from "The Prom" was included in the 2018 Macy's Thanksgiving Day Parade and made history by showing the first same-sex kiss in the parade's broadcast. The musical "Jagged Little Pill" featured lesbian characters Frankie Healy and Jo. Television began to address homosexuality much later than film. 
Local talk shows in the late 1950s first addressed homosexuality by inviting panels of experts (usually not gay themselves) to discuss the problems of gay men in society. Lesbianism was rarely included. The first portrayal of a lesbian on network television came in the NBC drama "The Eleventh Hour" in the early 1960s, in a teleplay about an actress who feels she is persecuted by her female director and, in distress, calls a psychiatrist who explains that she is a latent lesbian with deep-rooted guilt about her feelings for women. When she realizes this, however, she is able to pursue heterosexual relationships, which are portrayed as "healthy". Invisibility for lesbians continued in the 1970s when homosexuality became the subject of dramatic portrayals, first with medical dramas such as "Marcus Welby, M.D." and "Medical Center" featuring primarily male patients coming out to doctors, or staff members coming out to other staff members. These shows allowed homosexuality to be discussed clinically, with the main characters guiding troubled gay characters or correcting homophobic antagonists, while simultaneously comparing homosexuality to psychosis, criminal behavior, or drug use. Another stock plot device in the 1970s was the gay character in a police drama. They served as victims of blackmail or anti-gay violence, but more often as criminals. Beginning in the late 1960s with "N.Y.P.D.", "Police Story", and "Police Woman", the use of homosexuals in stories became much more prevalent, according to Vito Russo, as a response to their higher profiles in gay activism. Lesbians were included as villains, motivated to murder by their desires, internalized homophobia, or fear of being exposed as homosexual. One episode of "Police Woman" earned protests from the National Gay Task Force before it aired for portraying a trio of murderous lesbians who killed retirement home patients for their money. NBC edited the episode because of the protests, but a sit-in was staged in the office of the head of NBC. In the middle of the 1970s, gay men and lesbians began to appear as police officers or detectives facing coming out issues. This did not extend to CBS' groundbreaking show "Cagney & Lacey" in 1982, starring two female police detectives. The CBS production staff made conscious attempts to soften the characters so they would not appear to be lesbians. In 1991, a bisexual lawyer portrayed by Amanda Donohoe on "L.A. Law" shared the first significant lesbian kiss on primetime television with Michele Greene, stirring a controversy despite being labeled "chaste" by "The Hollywood Reporter". Though television did not begin to use recurring homosexual characters until the late 1980s, some early situation comedies used a stock character that author Stephen Tropiano calls "gay-straight": supporting characters who were quirky, did not comply with gender norms, or had ambiguous personal lives, and who "for all purposes "should" be gay". These included Zelda from "The Many Loves of Dobie Gillis", Miss Hathaway from "The Beverly Hillbillies", and Jo from "The Facts of Life". In the mid-1980s through the 1990s, sitcoms frequently employed a "coming out" episode, where a friend of one of the stars admits she is a lesbian, forcing the cast to deal with the issue. "Designing Women", "The Golden Girls", and "Friends" used this device with women in particular. Recurring lesbian characters who came out were seen on "Married... 
with Children", "Mad About You", and "Roseanne", in which a highly publicized episode had ABC executives afraid that a televised kiss between Roseanne and Mariel Hemingway would destroy ratings and ruin advertising. The episode was instead the week's highest rated. By far the sitcom with the most significant impact on the image of lesbians was "Ellen". Publicity surrounding Ellen's coming out episode in 1997 was enormous; Ellen DeGeneres appeared on the cover of "Time" magazine the week before the airing of "The Puppy Episode" with the headline "Yep, I'm Gay". Parties were held in many U.S. cities to watch the episode, and the opposition from conservative organizations was intense. WBMA-LP, the ABC affiliate in Birmingham, Alabama, even refused to air the first run of the episode, citing the conservative values of the local viewing audience, which earned the station some infamy and ire in the LGBT community. Even so, "The Puppy Episode" won an Emmy for writing, but as the show began to deal with Ellen Morgan's sexuality each week, network executives grew uncomfortable with the direction the show took and canceled it. Dramas following "L.A. Law" began incorporating homosexual themes, particularly with continuing storylines on "Relativity", "Picket Fences", and "ER", as well as on two other series, both of which tested the boundaries of sexuality and gender. A show directed at adolescents that had a particularly strong cult following was "Buffy the Vampire Slayer". In the fourth season of "Buffy", Tara and Willow admit their love for each other without any special fanfare, and the relationship is treated as the other romantic relationships on the show are. What followed was a series devoted solely to gay characters, this time on premium cable rather than network television. Showtime's American rendition of "Queer as Folk" ran for five years, from 2000 to 2005; two of the main characters were a lesbian couple. Showtime promoted the series as "No Limits", and "Queer as Folk" addressed homosexuality graphically. The aggressive advertising paid off as the show became the network's highest rated, doubling the numbers of other Showtime programs after the first season. In 2004, Showtime introduced "The L Word", a dramatic series devoted to a group of lesbian and bisexual women, running its final season in 2009. The invisibility of lesbians has gradually eroded since the early 1980s. This is in part due to public figures who have caused speculation and comment in the press about their sexuality and lesbianism in general. The primary figure earning this attention was Martina Navratilova, who served as tabloid fodder for years as she denied being lesbian, admitted to being bisexual, had very public relationships with Rita Mae Brown and Judy Nelson, and acquired as much press about her sexuality as she did for her athletic achievements. Navratilova spurred what scholar Diane Hamer termed "constant preoccupation" in the press with determining the root of same-sex desire. Other public figures acknowledged their homosexuality and bisexuality, notably musicians k.d. lang and Melissa Etheridge, while Madonna pushed sexual boundaries in her performances and publications. In 1993, lang and self-professed heterosexual supermodel Cindy Crawford posed for the August cover of "Vanity Fair" in a provocative arrangement that showed Crawford shaving lang's face, as lang lounged in a barber's chair wearing a pinstripe suit. The image "became an internationally recognized symbol of the phenomenon of lesbian chic", according to Hamer. 
The year 1994 marked a rise in lesbian visibility, particularly appealing to women with feminine appearances. Between 1992 and 1994, "Mademoiselle", "Vogue", "Cosmopolitan", "Glamour", "Newsweek", and "New York" magazines featured stories about women who admitted sexual histories with other women. One analyst reasoned the recurrence of lesbian chic was due to the often-used homoerotic subtexts of gay male subculture being considered off limits because of AIDS in the late 1980s and 1990s, joined with the distant memory of lesbians as they appeared in the 1970s: unattractive and militant. In short, lesbians became more attractive to general audiences when they ceased having political convictions. All the attention on feminine and glamorous women created what culture analyst Rodger Streitmatter characterizes as an unrealistic image of lesbians packaged by heterosexual men; the trend influenced an increase in the inclusion of lesbian material in pornography aimed at men. A resurgence of lesbian visibility and sexual fluidity was noted in 2009, with celebrities such as Cynthia Nixon and Lindsay Lohan commenting openly on their relationships with women, and reality television addressing same-sex relationships. Psychiatrists and feminist philosophers write that the rise in women acknowledging same-sex relationships is due to growing social acceptance, but also concede that "only a certain kind of lesbian—slim and elegant or butch in just the right androgynous way—is acceptable to mainstream culture". Although homosexuality among females has taken place in many cultures in history, a recent phenomenon is the development of family among same-sex partners. Before the 1970s, the idea that same-sex adults formed long-term committed relationships was unknown to many people. The majority of lesbians (between 60% and 80%) report being in a long-term relationship. Sociologists credit the high number of paired women to gender role socialization: the inclination for women to commit to relationships doubles in a lesbian union. Unlike heterosexual relationships that tend to divide work based on sex roles, lesbian relationships divide chores evenly between both members. Studies have also reported that emotional bonds are closer in lesbian and gay relationships than heterosexual ones. Family issues were significant concerns for lesbians when gay activism became more vocal in the 1960s and 1970s. Custody issues in particular were of interest since often courts would not award custody to mothers who were openly homosexual, even though the general procedure acknowledged children were awarded to the biological mother. Several studies performed as a result of custody disputes viewed how children grow up with same-sex parents compared to single mothers who did not identify as lesbians. They found that children's mental health, happiness, and overall adjustment is similar to children of divorced women who are not lesbians. Sexual orientation, gender identity, and sex roles of children who grow up with lesbian mothers are unaffected. Differences that were found include the fact that divorced lesbians tend to be living with a partner, fathers visit divorced lesbian mothers more often than divorced nonlesbian mothers, and lesbian mothers report a greater fear of losing their children through legal means. Improving opportunities for growing families of same-sex couples has shaped the political landscape within the past ten years. 
A push for same-sex marriage or civil unions in western countries has replaced other political objectives. , ten countries and six U.S. states offer same-sex marriage; civil unions are offered as an option in some European countries, U.S. states and individual municipalities. The ability to adopt domestically or internationally children or provide a home as a foster parent is also a political and family priority for many lesbians, as is improving access to artificial insemination. Lesbians of color have often been a marginalized group, including African American, Latina, Asian, Arab, and other non-Caucasian lesbians; and experienced racism, homophobia, and misogyny due to their many identities. Some scholars have noted that in the past the predominant lesbian community was largely composed of white women and influenced by American culture, leading some lesbians of color to experience difficulties integrating into the community at large. Many lesbians of color have stated that they were often systematically excluded from lesbian spaces based on the fact that they are women of color. Additionally, these women face a unique set of challenges within their respective racial communities. Many feel abandoned, as communities of color often view homosexual identity as a "white" lifestyle and see the acceptance of homosexuality as a setback in achieving equality. Lesbians of color, especially those of immigrant populations, often hold the sentiment that their sexual orientation identity adversely affects assimilation into the dominant culture. Historically, women of color were often excluded from participating in lesbian and gay movements. Scholars have stated that this exclusion came as a result of the majority of whites dismissing the intersections of gender, race, and sexuality that are a core part of the lesbian of color identity. Lesbians that organized events were mostly white and middle-class, and largely focused their political movements on the issues of sexism and homophobia, rather than class or race issues. The early lesbian feminist movement was criticized for excluding race and class issues from their spaces and for a lack of focus on issues that did not benefit white women. Audre Lorde, Barbara Smith, and Cherrie Moraga are cited as major theorists within the various lesbians of color movements for their insistence on inclusion and equality, from both racial communities and white lesbian communities. The many intersections surrounding lesbians of color can often contribute to an increased need for mental health resources. Lesbians of color are more likely to experience a number of psychological issues due to the various experiences of sexism, racism, and homophobia as a part of their existence. Mental health providers, such as therapists, often use heteronormative standards to gauge the health of lesbian relationships, and the relationships of lesbian women of color are often subjects of judgment because they are seen as the most deviant. Within racial communities, the decision to come out can be costly, as the threat of loss of support from family, friends, and the community at large is probable. Lesbians of color are often exposed to a range of adverse consequences, including microaggression, discrimination, menace, and violence.
https://en.wikipedia.org/wiki?curid=17846
Lamiaceae The Lamiaceae ( ) or Labiatae are a family of flowering plants commonly known as the mint or deadnettle or sage family. Many of the plants are aromatic in all parts and include widely used culinary herbs like basil, mentha, rosemary, sage, savory, marjoram, oregano, hyssop, thyme, lavender, and perilla. Some species are shrubs, trees (such as teak), or, rarely, vines. Many members of the family are widely cultivated, not only for their aromatic qualities, but also their ease of cultivation, since they are readily propagated by stem cuttings. Besides those grown for their edible leaves, some are grown for decorative foliage. Others are grown for seed, such as "Salvia hispanica" (chia), or for their edible tubers, such as "Plectranthus edulis", "Plectranthus esculentus", "Plectranthus rotundifolius", and "Stachys affinis" (Chinese artichoke). The family has a cosmopolitan distribution. The enlarged Lamiaceae contain about 236 genera and have been stated to contain 6,900 to 7,200 species, but the World Checklist lists 7,534. The largest genera are "Salvia" (900), "Scutellaria" (360), "Stachys" (300), "Plectranthus" (300), "Hyptis" (280), "Teucrium" (250), "Vitex" (250), "Thymus" (220), and "Nepeta" (200). "Clerodendrum" was once a genus of over 400 species, but by 2010, it had been narrowed to about 150. The family has traditionally been considered closely related to the Verbenaceae; in the 1990s, phylogenetic studies suggested that many genera classified in the Verbenaceae should be classified in the Lamiaceae or to other families in the order Lamiales. The alternative family name Labiatae refers to the fact that the flowers typically have petals fused into an upper lip and a lower lip ("" in Latin). The flowers are bilaterally symmetrical with five united petals and five united sepals. They are usually bisexual and verticillastrate (a flower cluster that looks like a whorl of flowers, but actually consists of two crowded clusters). Although this is still considered an acceptable alternative name, most botanists now use the name Lamiaceae in referring to this family. The leaves emerge oppositely, each pair at right angles to the previous one (decussate) or whorled. The stems are frequently square in cross section, but this is not found in all members of the family, and is sometimes found in other plant families. The last revision of the entire family was published in 2004. It described and provided keys to 236 genera. These are marked with an asterisk (*) in the list below. A few genera have been established or resurrected since 2004. These are marked with a plus sign (+). The remaining genera in the list are mostly of historical interest only and are from a source that includes such genera without explanation. Few of these are recognized in modern treatments of the family. Kew Gardens provides a list of genera that includes additional information. A list at the Angiosperm Phylogeny Website is frequently updated. The circumscription of several genera has changed since 2004. "Tsoongia", "Paravitex", and "Viticipremna" have been sunk into synonymy with "Vitex". "Huxleya" has been sunk into "Volkameria". "Kalaharia", "Volkameria", "Ovieda", and "Tetraclea" have been segregated from a formerly polyphyletic "Clerodendrum". "Rydingia" has been separated from "Leucas". The remaining "Leucas" is paraphyletic over four other genera. In 2004, the Lamiaceae were divided into seven subfamilies with 10 genera not placed in any of the subfamilies. 
The unplaced genera are: "Tectona", "Callicarpa", "Hymenopyramis", "Petraeovitex", "Peronema", "Garrettia", "Cymaria", "Acrymia", "Holocheila", and "Ombrocharis". The subfamilies are the Symphorematoideae, Viticoideae, Ajugoideae, Prostantheroideae, Nepetoideae, Scutellarioideae, and Lamioideae. The subfamily Viticoideae is probably not monophyletic. The Prostantheroideae and Nepetoideae are divided into tribes. These are shown in the phylogenetic tree below. Most of the genera of Lamiaceae have never been sampled for DNA for molecular phylogenetic studies. Most of those that have been are included in the following phylogenetic tree. The phylogeny depicted below is based on seven different sources.
https://en.wikipedia.org/wiki?curid=17856
Wide shot In photography, filmmaking and video production, a wide shot (sometimes referred to as a full shot or long shot) typically shows the entire object or human figure and is usually intended to place it in some relation to its surroundings. These are typically shot now using wide-angle lenses (an approximately 25 mm lens in 35 mm photography and 10 mm lens in 16 mm photography). However, due to sheer distance, establishing shots and extremely wide shots can use almost any camera type. This type of filmmaking was a result of filmmakers trying to retain the sense of the viewer watching a play in front of them, as opposed to just a series of pictures. The wide shot has been used since films have been made as it is a very basic type of cinematography. In 1878, one of the first true motion pictures, "Sallie Gardner at a Gallop", was released. Even though this wouldn't be considered a film in the current motion picture industry, it was a huge step towards complete motion pictures. It is arguable that it is very basic but it still remains that it was displayed as a wide angle as both the rider and horse are fully visible in the frame. After this innovation, in the 1880s celluloid photographic film and motion picture cameras became available so more motion pictures could be created in the form of Kinetoscope or through projectors. These early films also maintained a wide angle layout as it was the best way to keep everything visible for the viewer. Once motion pictures became more available in the 1890s there were public screenings of many different films only being around a minute long, or even less. These films again adhered to the wide shot style. One of the first competitive filming techniques came in the form of the close-up as George Albert Smith incorporated them into his film Hove. Though unconfirmed as the first usage of this method it is one of the earliest recorded examples. Once the introduction of new framing techniques were introduced then more and more were made and used for their benefits that they could provide that wide shots couldn't. This was the point at which motion pictures evolved from short, minute long, screening to becoming full-length motion pictures. More and more cinematic techniques appeared, resulting in the wide shot being less commonly used. However, it still remained as it is almost irreplaceable in what it can achieve. When television entered the home, it was seen as a massive hit to the cinema industry and many saw it as the decline in cinema popularity. This in turn resulted in films having to stay ahead of television by incorporating superior quality than that of a television. This was done by adding color but importantly it implemented the use of widescreen. This would allow a massive increase amount of space usable by the director, thus allowing an even wider shot for the viewer to witness more of whatever the director intends to evoke with any given shot. Most modern films will frequently use the different types of wide shots as they are a staple in filmmaking and are almost impossible to avoid unless deliberately chosen to. In the current climate of films, the technical quality of any given shot will appear with much better clarity which has given life to some incredible shots from modern cinema. Also, given the quality of modern home entertainment mediums such as Blu Ray, 3D and Ultra HD Blu Rays this has allowed the scope and size of any given frame to encompass more of the scene and environment in greater detail. 
There are a variety of ways of framing that are considered wide shots. Many directors are known for their use of the variety of wide shots. A key example is the frequent use of establishing shots and very wide shots in Peter Jackson's "The Lord of the Rings" trilogy, showing the vast New Zealand landscape to instil awe in the audience. In the 1993 film "Schindler's List", there is a running image of a small girl trapped within a concentration camp wearing a red coat (the only colour in the film). She is frequently pictured in a wide shot as a way to display both her and the horrific surroundings, building a disturbing contrast. In the 1939 film "The Wizard of Oz", a very wide shot is used that keeps all the protagonists on screen with the Wizard's palace in clear view; "The Wizard of Oz" was also one of the first mainstream motion pictures to include colour. The 1962 film "Lawrence of Arabia" contains an enormous number of extreme wide shots that convey the scale of the lead's surroundings and visually dwarf him, making him seem more vulnerable and weak. The 1981 film "Raiders of the Lost Ark", the first in the Indiana Jones series, uses a wide shot to show the dangerous scale of a boulder chasing the protagonist. The 2008 film "The Dark Knight" featured a practical stunt in which a large truck and trailer are flipped nose first; this is shot from very far back to give the shot more clarity and to show the flip in its entirety rather than cutting midway through. In the 2015 Ridley Scott film "The Martian", the protagonist Mark Watney is stranded on Mars, and the film contains many wide shots used to show the Martian landscape and give the character a sense of isolation.
https://en.wikipedia.org/wiki?curid=17859
Logarithm In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the "base" b, must be raised to produce that number x. In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = 10^3, the "logarithm base 10" of 1000 is 3, or log10(1000) = 3. The logarithm of x to "base" b is denoted as log_b(x), or without parentheses, log_b x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter such as in big O notation. More generally, exponentiation allows any positive real number as base to be raised to any real power, always producing a positive result, so for any two positive real numbers b and x, where b is not equal to 1, log_b(x) is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is: log_b(x) = y exactly if b^y = x. For example, log2(64) = 6, as 2^6 = 64. The logarithm base 10 (that is, b = 10) is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e (that is, b ≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler integral and derivative. The binary logarithm uses base 2 (that is, b = 2) and is commonly used in computer science. Logarithms are examples of concave functions. Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors: log_b(xy) = log_b(x) + log_b(y), provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms. Logarithmic scales reduce wide-ranging quantities to tiny scopes. For example, the decibel (dB) is a unit used to express ratio as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The modular discrete logarithm is another variant; it has uses in public-key cryptography. Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. Addition, the simplest of these, is undone by subtraction: when you add a number a to x to get x + a, to reverse this operation you need to "subtract" a from x + a. Multiplication, the next-simplest operation, is undone by division: if you multiply x by a to get ax, you then must divide ax by a in order to return to the original expression x.
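To make the inverse relationship and the product rule concrete, here is a minimal Python sketch (not part of the original article; the numbers are illustrative only) that checks b^(log_b x) = x and log_b(xy) = log_b(x) + log_b(y) numerically.

```python
import math

def log_base(x, b):
    """Logarithm of x to base b, via the change-of-base formula."""
    return math.log(x) / math.log(b)

b, x, y = 10.0, 1000.0, 50.0

# The logarithm undoes exponentiation: b raised to log_b(x) gives back x.
assert math.isclose(b ** log_base(x, b), x)

# Product rule: log_b(x * y) = log_b(x) + log_b(y),
# the fact that made logarithm tables useful for multiplication.
assert math.isclose(log_base(x * y, b), log_base(x, b) + log_base(y, b))

print(log_base(1000, 10))   # approximately 3, since 10**3 == 1000
```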
Logarithms also undo a fundamental arithmetic operation, exponentiation. Exponentiation is when you raise a number to a certain power. For example, raising 2 to the power 3 equals 8: 2^3 = 2 × 2 × 2 = 8. The general case is when you raise a number b to the power of y to get x: b^y = x. The number b is referred to as the base of this expression. The base is the number that is raised to a particular power—in the above example, the base of the expression 2^3 = 8 is 2. It is easy to make the base b the subject of the expression: all you have to do is take the y-th root of both sides. This gives: b = x^(1/y). It is less easy to make y the subject of the expression. Logarithms allow us to do this: y = log_b(x). This expression means that y is equal to the power that you need to raise b to in order to get x. This operation undoes exponentiation because the logarithm of x tells you the "exponent" that the base has been raised to. This subsection contains a short overview of the exponentiation operation, which is fundamental to understanding logarithms. Raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ... × b (with n factors of b). Exponentiation may be extended to b^y, where b is a positive number and the "exponent" y is any real number. For example, b^(-1) is the reciprocal of b, that is, 1/b. Raising b to the power 1/2 is the square root of b. More generally, raising b to a rational power p/q, where p and q are integers, is given by the q-th root of b^p. Finally, any irrational number y (a real number which is not rational) can be approximated to arbitrary precision by rational numbers. This can be used to compute the y-th power of b: for example, if y ≈ 1.414 is approximated by the increasingly accurate rationals 1.4, 1.41, 1.414, ..., then b^y is increasingly well approximated by b^1.4, b^1.41, b^1.414, .... A more detailed explanation, as well as the formula for b^(p/q), is contained in the article on exponentiation. The "logarithm" of a positive real number x with respect to base b is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the solution y to the equation b^y = x. The logarithm is denoted log_b(x) (pronounced as "the logarithm of x to base b" or "the base-b logarithm of x" or (most commonly) "the log, base b, of x"). In the equation y = log_b(x), the value y is the answer to the question "To what power must b be raised, in order to yield x?". Several important formulas, sometimes called "logarithmic identities" or "logarithmic laws", relate logarithms to one another. The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b^(log_b x) or y = b^(log_b y) in the left hand sides. The logarithm log_b(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: log_b(x) = log_k(x) / log_k(b). Starting from the defining identity x = b^(log_b x), we can apply log_k to both sides of this equation, to get log_k(x) = log_b(x) · log_k(b). Solving for log_b(x) yields: log_b(x) = log_k(x) / log_k(b), showing the conversion factor from given log_k-values to their corresponding log_b-values to be 1 / log_k(b). Typical scientific calculators calculate the logarithms to bases 10 and e.
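The identities and the change-of-base formula described above can be verified numerically; the following Python sketch (illustrative only, with arbitrarily chosen values) does so.

```python
import math

def log_b(x, b):
    # Change of base: log_b(x) = log_k(x) / log_k(b) for any convenient base k.
    # Here k = e, i.e. math.log computes the natural logarithm.
    return math.log(x) / math.log(b)

b, x, y, p = 3.0, 7.0, 2.0, 5.0

assert math.isclose(log_b(x * y, b), log_b(x, b) + log_b(y, b))   # product
assert math.isclose(log_b(x / y, b), log_b(x, b) - log_b(y, b))   # quotient
assert math.isclose(log_b(x ** p, b), p * log_b(x, b))            # power
assert math.isclose(log_b(x ** (1 / p), b), log_b(x, b) / p)      # root

# Change of base checked against a library base-10 logarithm:
assert math.isclose(log_b(x, b), math.log10(x) / math.log10(b))
print("all logarithmic identities hold for the sample values")
```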
Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: log_b(x) = log10(x) / log10(b) = ln(x) / ln(b). Given a number x and its logarithm y = log_b(x) to an unknown base b, the base is given by: b = x^(1/y), which can be seen from taking the defining equation x = b^y to the power of 1/y. Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant ≈ 2.71828), and b = 2 (the binary logarithm). In mathematical analysis, the logarithm base e is widespread because of analytical properties explained below. On the other hand, base-10 logarithms are easy to use for manual calculations in the decimal number system: log10(10x) = 1 + log10(x). Thus, log10(x) is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log10(x). For example, log10(1430) is approximately 3.15. The next integer is 4, which is the number of digits of 1430. Both the natural logarithm and the logarithm to base two are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively. Binary logarithms are also used in computer science, where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is ubiquitous and the cent is the binary logarithm (scaled by 1200) of the ratio between two adjacent equally-tempered pitches in European classical music; and in photography to measure exposure values. The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines write log x instead of log_b x when the intended base can be determined from the context. The notation in which the base is written as a superscript before "log" also occurs. The "ISO notation" column lists designations suggested by the International Organization for Standardization (ISO 31-11). Because the notation log x has been used for all three bases (or when the base is indeterminate or immaterial), the intended base must often be inferred based on context or discipline. In computer science, log usually refers to log2, and in mathematics, log usually refers to the natural logarithm. In other contexts, log often means log10. The history of logarithms in seventeenth-century Europe is the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled "Mirifici Logarithmorum Canonis Descriptio" ("Description of the Wonderful Rule of Logarithms"). Prior to Napier's invention, there had been other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. Napier coined the term for logarithm in Middle Latin, "logarithmorum," derived from the Greek, literally meaning, "ratio-number," from "logos" "proportion, ratio, word" + "arithmos" "number". The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis. Invention of the function now known as natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague.
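As a small illustration of the two facts just mentioned, recovering an unknown base from x and y = log_b(x), and relating log10 to decimal digit counts, consider this Python sketch (not from the original article; the values are chosen arbitrarily).

```python
import math

# Recovering the base: if y = log_b(x), then b = x**(1/y),
# obtained by raising the defining equation x = b**y to the power 1/y.
x = 1430.0
b = 7.0
y = math.log(x, b)          # math.log accepts an optional base argument
assert math.isclose(x ** (1.0 / y), b)

# Digit count: the number of decimal digits of a positive integer n
# is the smallest integer strictly greater than log10(n).
n = 1430
digits = math.floor(math.log10(n)) + 1
assert digits == len(str(n)) == 4
print(f"log10({n}) = {math.log10(n):.2f}, so {n} has {digits} digits")
```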
Archimedes had written "The Quadrature of the Parabola" in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values, prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated by Christiaan Huygens, and James Gregory. The notation Log y was adopted by Leibniz in 1675, and the next year he connected it to the integral formula_36 By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms As the function is the inverse function of log"b""x", it has been called the antilogarithm. A key tool that enabled the practical use of logarithms was the "table of logarithms". The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range 1–1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of for any number in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point. The characteristic of is one plus the characteristic of , and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by Greater accuracy can be obtained by interpolation: The value of can be determined by reverse look up in the same table, since the logarithm is a monotonic function. The product and quotient of two positive numbers and ' were routinely calculated as the sum and difference of their logarithms. The product ' or quotient came from looking up the antilogarithm of the sum or difference, via the same table: and For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities. Calculations of powers and roots are reduced to multiplications or divisions and look-ups by and Trigonometric calculations were facilitated by tables that contained the common logarithms of trigonometric functions. Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. 
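The table-based procedure sketched above (split a common logarithm into characteristic and mantissa, add the logarithms, then look up the antilogarithm) can be mimicked in a few lines of Python; this is only an illustration of the method, not historical table data.

```python
import math

def char_and_mantissa(x):
    """Split log10(x) into its integer characteristic and fractional mantissa."""
    lg = math.log10(x)
    characteristic = math.floor(lg)
    return characteristic, lg - characteristic

def multiply_via_logs(a, b):
    # Add the logarithms, then take the antilogarithm 10**(...),
    # which is what a log table plus an antilog table let you do by hand.
    ca, ma = char_and_mantissa(a)
    cb, mb = char_and_mantissa(b)
    return 10 ** ((ca + cb) + (ma + mb))

# The mantissas of 3542 and 3.542 agree; only the characteristics differ,
# which is why printed tables only needed to store mantissas.
assert math.isclose(char_and_mantissa(3542)[1], char_and_mantissa(3.542)[1])

print(multiply_via_logs(3542, 17.3))   # close to 3542 * 17.3 = 61276.6
```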
Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here: For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables. A deeper study of logarithms requires the concept of a "function". A function is a rule that, given one number, produces another number. An example is the function producing the -th power of from any real number , where the base is a fixed number. This function is written: formula_43 To justify the definition of logarithms, it is necessary to show that the equation has a solution and that this solution is unique, provided that is positive and that is positive and unequal to 1. A proof of that fact requires the intermediate value theorem from elementary calculus. This theorem states that a continuous function that produces two values ' and ' also produces any value that lies between ' and '. A function is "continuous" if it does not "jump", that is, if its graph can be drawn without lifting the pen. This property can be shown to hold for the function . Because "" takes arbitrarily large and arbitrarily small positive values, any number lies between and for suitable and . Hence, the intermediate value theorem ensures that the equation has a solution. Moreover, there is only one solution to this equation, because the function "f" is strictly increasing (for ), or strictly decreasing (for ). The unique solution is the logarithm of to base , . The function that assigns to its logarithm is called "logarithm function" or "logarithmic function" (or just "logarithm"). The function is essentially characterized by the product formula More precisely, the logarithm to any base is the only increasing function "f" from the positive reals to the reals satisfying and The formula for the logarithm of a power says in particular that for any number , In prose, taking the power of and then the logarithm gives back . Conversely, given a positive number , the formula says that first taking the logarithm and then exponentiating gives back . Thus, the two possible ways of combining (or composing) logarithms and exponentiation give back the original number. Therefore, the logarithm to base is the "inverse function" of . Inverse functions are closely related to the original functions. Their graphs correspond to each other upon exchanging the - and the -coordinates (or upon reflection at the diagonal line = ), as shown at the right: a point on the graph of "f" yields a point on the graph of the logarithm and vice versa. As a consequence, diverges to infinity (gets bigger than any given number) if grows to infinity, provided that is greater than one. In that case, is an increasing function. For , tends to minus infinity instead. When approaches zero, goes to minus infinity for (plus infinity for , respectively). Analytic properties of functions pass to their inverses. Thus, as is a continuous and differentiable function, so is . Roughly, a continuous function is differentiable if its graph has no sharp "corners". 
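The existence argument just sketched (the continuous, strictly increasing function y ↦ b^y takes arbitrarily small and arbitrarily large positive values, so by the intermediate value theorem b^y = x has exactly one solution) can be turned into a crude numerical procedure: bisection on the exponent. The Python sketch below is illustrative only and assumes b > 1.

```python
import math

def log_by_bisection(x, b, tolerance=1e-12):
    """Solve b**y == x for y by bisection, using only exponentiation.

    Works for b > 1 and x > 0; mirrors the intermediate-value-theorem
    argument: b**y is continuous and strictly increasing in y."""
    low, high = -1.0, 1.0
    while b ** low > x:        # widen the bracket until b**low <= x
        low *= 2
    while b ** high < x:       # ... and b**high >= x
        high *= 2
    while high - low > tolerance:
        mid = (low + high) / 2
        if b ** mid < x:
            low = mid
        else:
            high = mid
    return (low + high) / 2

print(log_by_bisection(1000, 10))   # close to 3
print(math.log(1000, 10))           # library value for comparison
```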
Moreover, as the derivative of evaluates to by the properties of the exponential function, the chain rule implies that the derivative of is given by That is, the slope of the tangent touching the graph of the logarithm at the point equals . The derivative of ln is 1/"x"; this implies that ln is the unique antiderivative of that has the value 0 for . It is this very simple formula that motivated to qualify as "natural" the natural logarithm; this is also one of the main reasons of the importance of the constant . The derivative with a generalised functional argument is The quotient at the right hand side is called the logarithmic derivative of "f". Computing by means of the derivative of is known as logarithmic differentiation. The antiderivative of the natural logarithm is: Related formulas, such as antiderivatives of logarithms to other bases can be derived from this equation using the change of bases. The natural logarithm of " equals the integral of   from 1 to : In other words, equals the area between the axis and the graph of the function , ranging from to (figure at the right). This is a consequence of the fundamental theorem of calculus and the fact that the derivative of is . The right hand side of this equation can serve as a definition of the natural logarithm. Product and power logarithm formulas can be derived from this definition. For example, the product formula is deduced as: The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor "t" and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function again. Therefore, the left hand blue area, which is the integral of from "t" to "tu" is the same as the integral from 1 to "u". This justifies the equality (2) with a more geometric proof. The power formula may be derived in a similar way: The second equality uses a change of variables (integration by substitution), . The sum over the reciprocals of natural numbers, is called the harmonic series. It is closely tied to the natural logarithm: as "n" tends to infinity, the difference, converges (i.e., gets arbitrarily close) to a number known as the Euler–Mascheroni constant . This relation aids in analyzing the performance of algorithms such as quicksort. There are also some other integral representations of the logarithm that are useful in some situations: The first identity can be verified by showing that it has the same value at , and the same derivative. The second identity can be proven by writing and then inserting the Laplace transform of (and ). Real numbers that are not algebraic are called transcendental; for example, and "e" are such numbers, but formula_60 is not. Almost all real numbers are transcendental. The logarithm is an example of a transcendental function. The Gelfond–Schneider theorem asserts that logarithms usually take transcendental, i.e., "difficult" values. Logarithms are easy to compute in some cases, such as . In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. 
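The analytic facts in this passage, namely that the derivative of ln x is 1/x, that ln x is the area under 1/t between 1 and x, and that the harmonic sum minus ln n approaches the Euler–Mascheroni constant, can each be checked numerically. The Python sketch below uses crude approximations purely for illustration.

```python
import math

# d/dx ln(x) = 1/x, checked with a symmetric difference quotient at x = 3.
x, h = 3.0, 1e-6
numeric_derivative = (math.log(x + h) - math.log(x - h)) / (2 * h)
assert math.isclose(numeric_derivative, 1 / x, rel_tol=1e-6)

# ln(x) equals the area under 1/t between 1 and x (midpoint rule, 100000 strips).
def ln_by_area(x, strips=100_000):
    width = (x - 1) / strips
    return sum(width / (1 + (i + 0.5) * width) for i in range(strips))

assert math.isclose(ln_by_area(5.0), math.log(5.0), rel_tol=1e-6)

# H_n - ln(n) tends to the Euler–Mascheroni constant (about 0.5772).
n = 1_000_000
harmonic = sum(1 / k for k in range(1, n + 1))
print(harmonic - math.log(n))   # roughly 0.57722
```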
Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms by using only the operations of addition and bit shifts. Moreover, the binary logarithm algorithm calculates recursively, based on repeated squarings of , taking advantage of the relation For any real number that satisfies , the following formula holds: This is a shorthand for saying that can be approximated to a more and more accurate value by the following expressions: For example, with the third approximation yields 0.4167, which is about 0.011 greater than . This series approximates with arbitrary precision, provided the number of summands is large enough. In elementary calculus, is therefore the limit of this series. It is the Taylor series of the natural logarithm at . The Taylor series of provides a particularly useful approximation to when is small, , since then For example, with the first-order approximation gives , which is less than 5% off the correct value 0.0953. Another series is based on the area hyperbolic tangent function: for any real number . Using sigma notation, this is also written as This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if is close to 1. For example, for , the first three terms of the second series approximate with an error of about . The quick convergence for close to 1 can be taken advantage of in the following way: given a low-accuracy approximation and putting the logarithm of is: The better the initial approximation is, the closer is to 1, so its logarithm can be calculated efficiently. can be calculated using the exponential series, which converges quickly provided is not too large. Calculating the logarithm of larger can be reduced to smaller values of by writing , so that . A closely related method can be used to compute the logarithm of integers. Putting formula_69 in the above series, it follows that: If the logarithm of a large integer is known, then this series yields a fast converging series for , with a rate of convergence of formula_71. The arithmetic–geometric mean yields high precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work is approximated to a precision of (or "p" precise bits) by the following formula (due to Carl Friedrich Gauss): Here denotes the arithmetic–geometric mean of and . It is obtained by repeatedly calculating the average formula_73 (arithmetic mean) and formula_74 (geometric mean) of and then let those two numbers become the next and . The two numbers quickly converge to a common limit which is the value of . "m" is chosen such that to ensure the required precision. A larger "m" makes the calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants and can be calculated with quickly converging series. While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm that is similar to long division and was later used in the Connection Machine. 
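As a concrete example of the series methods discussed above, the following Python sketch implements the area-hyperbolic-tangent series ln(z) = 2(w + w^3/3 + w^5/5 + ...) with w = (z - 1)/(z + 1), together with a simple range reduction; it is an illustration under those stated formulas, not a production algorithm.

```python
import math

def ln_series(z, terms=20):
    """Natural logarithm from the atanh-type series; fast when z is near 1."""
    w = (z - 1) / (z + 1)
    return 2 * sum(w ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

for z in (1.5, 2.0, 10.0):
    approx = ln_series(z)
    print(z, approx, math.log(z), abs(approx - math.log(z)))

# Range reduction: write z = m * 2**e with 0.5 <= m < 1, so that
# ln(z) = ln(m) + e * ln(2) and the series only ever sees arguments near 1.
def ln_reduced(z):
    m, e = math.frexp(z)            # z == m * 2**e
    return ln_series(m) + e * ln_series(2.0, terms=60)

assert math.isclose(ln_reduced(12345.678), math.log(12345.678), rel_tol=1e-12)
```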
The algorithm uses the fact that every real number formula_76 is representable as a product of distinct factors of the form formula_77. The algorithm sequentially builds that product formula_78: if formula_79, then it changes formula_78 to formula_81. It then increase formula_82 by one regardless. The algorithm stops when formula_82 is large enough to give the desired accuracy. Because formula_84 is the sum of the terms of the form formula_85 corresponding to those formula_82 for which the factor formula_87 was included in the product formula_78, formula_84 may be computed by simple addition, using a table of formula_85 for all formula_82. Any base may be used for the logarithm table. Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function grows very slowly for large , logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation. Scientific quantities are often expressed as logarithms of other quantities, using a "logarithmic scale". For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm. The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times and a 6.0 releases 1000 times the energy of a 4.0. Another logarithmic scale is apparent magnitude. It measures the brightness of stars logarithmically. Yet another example is pH in chemistry; pH is the negative of the common logarithm of the activity of hydronium ions (the form hydrogen ions take in water). The activity of hydronium ions in neutral water is 10−7 mol·L−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 104 of the activity, that is, vinegar's hydronium ion activity is about . 
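The logarithmic scales just described translate directly into arithmetic: a decibel value is 10 times the common logarithm of a power ratio (20 times for voltage or amplitude ratios), and pH is the negative common logarithm of the hydronium-ion activity. The Python sketch below evaluates the examples from the text and is illustrative only.

```python
import math

def decibels_power(ratio):
    # Decibels for a power ratio: 10 times the common logarithm.
    return 10 * math.log10(ratio)

def decibels_voltage(ratio):
    # For voltage (amplitude) ratios the factor is 20 instead of 10.
    return 20 * math.log10(ratio)

def pH(hydronium_activity):
    # pH is the negative common logarithm of the hydronium-ion activity (mol/L).
    return -math.log10(hydronium_activity)

print(decibels_power(1000))   # a 1000-fold power ratio is 30 dB
print(decibels_voltage(10))   # a 10-fold voltage ratio is 20 dB
print(pH(1e-7))               # neutral water: pH 7
print(pH(1e-3))               # vinegar-like acidity: pH 3, i.e. 10**4 times more active
```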
Semilog (log-linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form appear as straight lines with slope equal to the logarithm of . Log-log graphs scale both axes logarithmically, which causes functions of the form to be depicted as straight lines with slope equal to the exponent "k". This is applied in visualizing and analyzing power laws. Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less realistic than more recent models, such as Stevens's power law.) Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10x as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly. Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm. Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence. Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the ""log likelihood""), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables. Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is "d" (from 1 to 9) equals , "regardless" of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting. Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). 
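Benford's law, mentioned just above, gives the probability of a leading digit d as log10(1 + 1/d); the short Python sketch below simply evaluates those probabilities and checks that they sum to 1 (it does not analyze any real data set).

```python
import math

# Probability that the leading decimal digit is d, according to Benford's law.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(f"leading digit {d}: {p:.1%}")   # 1 -> about 30.1%, 2 -> about 17.6%, ...

# The nine probabilities cover all possible leading digits, so they sum to 1.
assert math.isclose(sum(benford.values()), 1.0)
```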
Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems. For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, comparisons, where "N" is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to . The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model. A function is said to grow logarithmically if is (exactly or approximately) proportional to the logarithm of . (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number "N" can be represented in binary form in no more than bits. In other words, the amount of memory needed to store "N" grows logarithmically with "N". Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy "S" of some physical system is defined as The sum is over all possible states "i" of the system in question, such as the positions of gas particles in a container. Moreover, is the probability that the state "i" is attained and "k" is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of "N" possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as bits. Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive. Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar: small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure . Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question. Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note "A" has a frequency of 440 Hz and "B-flat" has a frequency of 466 Hz. The interval between "A" and "B-flat" is a semitone, as is the one between "B-flat" and "B" (frequency 493 Hz). 
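The semitone example above can be made quantitative: in equal temperament, an interval with frequency ratio f2/f1 spans 12·log2(f2/f1) semitones, or 1200·log2(f2/f1) cents (the standard equal-temperament relation; the next paragraph develops the same idea). A small Python sketch, for illustration only:

```python
import math

def semitones(f1, f2):
    # In equal temperament an octave (frequency ratio 2) is 12 semitones,
    # so an interval is 12 times the binary logarithm of the frequency ratio.
    return 12 * math.log2(f2 / f1)

def cents(f1, f2):
    # A cent is a hundredth of a semitone: 1200 cents per octave.
    return 1200 * math.log2(f2 / f1)

print(semitones(440, 466))   # A to B-flat: roughly one semitone (about 0.99)
print(semitones(466, 493))   # B-flat to B: also roughly one semitone (about 0.97)
print(cents(440, 880))       # one octave is exactly 1200 cents
print(semitones(440, 880))   # and exactly 12 semitones
```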
Accordingly, the frequency ratios agree: Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the logarithm of the frequency ratio, while the logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments. Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer , the quantity of prime numbers less than or equal to is denoted . The prime number theorem asserts that is approximately given by in the sense that the ratio of and that fraction approaches 1 when tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and is prime is inversely proportional to the number of decimal digits of . A far better estimate of is given by the offset logarithmic integral function , defined by The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing and . The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm. The logarithm of "n" factorial, , is given by This can be used to obtain Stirling's formula, an approximation of for large "n". All the complex numbers that solve the equation are called "complex logarithms" of , when is (considered as) a complex number. A complex number is commonly represented as , where and are real numbers and is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number by its absolute value, that is, the (positive, real) distance to the origin, and an angle between the real () axis "Re" and the line passing through both the origin and . This angle is called the argument of . The absolute value of is given by Using the geometrical interpretation of formula_99 and formula_100 and their periodicity in formula_101 any complex number may be denoted as for any integer number . Evidently the argument of is not uniquely specified: both and ' = + 2"k" are valid arguments of for all integers , because adding 2"k" radian or "k"⋅360° to corresponds to "winding" around the origin counter-clock-wise by turns. The resulting complex number is always , as illustrated at the right for . One may select exactly one of the possible arguments of as the so-called "principal argument", denoted , with a capital , by requiring to belong to one, conveniently selected turn, e.g., formula_103 or formula_104 These regions, where the argument of is uniquely determined are called "branches" of the argument function. Euler's formula connects the trigonometric functions sine and cosine to the complex exponential: Using this formula, and again the periodicity, the following identities hold: where is the unique real natural logarithm, denote the complex logarithms of , and is an arbitrary integer. Therefore, the complex logarithms of , which are all those complex values for which the power of equals , are the infinitely many values Taking such that formula_108 is within the defined interval for the principal arguments, then is called the "principal value" of the logarithm, denoted , again with a capital . The principal argument of any positive real number is 0; hence is a real number and equals the real (natural) logarithm. 
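The principal value described at the end of this passage can be seen directly with Python's cmath module, whose log function returns Log z = ln|z| + i·Arg z with Arg z in (−π, π]; the example number here is arbitrary, and the sketch also checks the multi-valued family of logarithms.

```python
import cmath
import math

z = -1 + 1j

principal = cmath.log(z)                     # principal value of the complex log
modulus, argument = abs(z), cmath.phase(z)   # |z| and the principal argument Arg z

# Log z = ln|z| + i * Arg z
assert cmath.isclose(principal, complex(math.log(modulus), argument))

# Every value ln|z| + i*(Arg z + 2*pi*k) is also a logarithm of z:
for k in (-1, 0, 1, 2):
    w = complex(math.log(modulus), argument + 2 * math.pi * k)
    assert cmath.isclose(cmath.exp(w), z)    # exponentiating any branch returns z

print(principal)    # roughly 0.3466 + 2.3562j for z = -1 + 1j
```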
However, the above formulas for logarithms of products and powers do "not" generalize to the principal value of the complex logarithm. The illustration at the right depicts , confining the arguments of to the interval . This way the corresponding branch of the complex logarithm has discontinuities all along the negative real axis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e., not changing to the corresponding -value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of ", and consequently the "logarithm of ", multi-valued functions. Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the "p"-adic logarithm, the inverse function of the "p"-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map. In the context of finite groups exponentiation is given by repeatedly multiplying one group element with itself. The discrete logarithm is the integer "n" solving the equation where is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field. Further logarithm-like inverse functions include the "double logarithm" ln(ln("x")), the "super- or hyper-4-logarithm" (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of , and of the logistic function, respectively. From the perspective of group theory, the identity expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) "dx" on the reals corresponds to the Haar measure on the positive reals. The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring. Logarithmic one-forms appear in complex analysis and algebraic geometry as differential forms with logarithmic poles. The polylogarithm is the function defined by It is related to the natural logarithm by . Moreover, equals the Riemann zeta function .
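The asymmetry mentioned above for the discrete logarithm (exponentiation in a finite group is cheap, while inverting it is believed hard in well-chosen groups) can be illustrated on a toy multiplicative group modulo a small prime. Real cryptographic groups are vastly larger; this Python sketch is purely illustrative, and 2 happens to be a generator modulo 101.

```python
# Toy discrete logarithm in the multiplicative group modulo a small prime p.
p, g = 101, 2            # toy parameters; 2 generates the group of nonzero residues mod 101

def discrete_exp(g, n, p):
    # Fast direction: square-and-multiply exponentiation, O(log n) multiplications.
    return pow(g, n, p)

def discrete_log(g, h, p):
    # Slow direction: brute force over all exponents; infeasible at cryptographic sizes.
    for n in range(p - 1):
        if pow(g, n, p) == h:
            return n
    return None

n = 57
h = discrete_exp(g, n, p)           # easy to compute
assert discrete_log(g, h, p) == n   # recovered here only because p is tiny
print(g, "**", n, "mod", p, "=", h)
```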
https://en.wikipedia.org/wiki?curid=17860
L. Ron Hubbard Lafayette Ronald Hubbard (March 13, 1911 – January 24, 1986) was an American author of science fiction and fantasy stories who founded the Church of Scientology. In 1950, Hubbard authored "Dianetics: The Modern Science of Mental Health" and established a series of organizations to promote Dianetics. In 1952, Hubbard lost the rights to Dianetics in bankruptcy proceedings, and he subsequently founded Scientology. Thereafter Hubbard oversaw the growth of the Church of Scientology into a worldwide organization. Born in Tilden, Nebraska, in 1911, Hubbard spent much of his childhood in Helena, Montana. After his father was posted to the U.S. naval base on Guam, Hubbard traveled to Asia and the South Pacific in the late 1920s. In 1930, Hubbard enrolled at George Washington University to study civil engineering but dropped out in his second year. He began his career as a prolific writer of pulp fiction stories and married Margaret "Polly" Grubb, who shared his interest in aviation. Hubbard was an officer in the Navy during World War II; he briefly commanded two ships but was removed from command both times. The last few months of his active service were spent in a hospital, being treated for a variety of complaints. Scientology became increasingly controversial during the 1960s and came under intense media, government and legal pressure in a number of countries. During the late 1960s and early 1970s, Hubbard spent much of his time at sea on his personal fleet of ships as "Commodore" of the Sea Organization, an elite quasi-paramilitary group of Scientologists. Hubbard returned to the United States in 1975 and went into seclusion in the California desert after an unsuccessful attempt to take over the town of Clearwater, Florida. In 1978, Hubbard was convicted of fraud after he was tried "in absentia" by France. In the same year, eleven high-ranking members of Scientology were indicted on 28 charges for their role in the Church's Snow White Program, a systematic program of espionage against the United States government. One of the indicted was Hubbard's wife Mary Sue Hubbard, who was in charge of the program; L. Ron Hubbard was named an unindicted co-conspirator. Hubbard spent the remaining years of his life in seclusion in a luxury motorhome on a ranch in California, attended to by a small group of Scientology officials. He died at age 74 in January 1986. Following Hubbard's death, Scientology leaders announced that his body had become an impediment to his work and that he had decided to "drop his body" to continue his research on another planet. Though many of Hubbard's autobiographical statements have been found to be fictitious, the Church of Scientology describes Hubbard in hagiographic terms and rejects any suggestion that its account of Hubbard's life is not historical fact. L. Ron Hubbard was born in 1911 in Tilden, Nebraska, the only child of Ledora May (née Waterbury), who had trained as a teacher, and Harry Ross Hubbard, a former United States Navy officer. After moving to Kalispell, Montana, they settled in Helena in 1913. Hubbard's father rejoined the Navy in April 1917, during World War I, while his mother worked as a clerk for the state government. During the 1920s the Hubbards repeatedly relocated around the United States and overseas. Hubbard was active in the Boy Scouts in Washington, D.C. and earned the rank of Eagle Scout in 1924, two weeks after his 13th birthday. 
In 1925, Hubbard was enrolled as a freshman at Union High School, Bremerton, and the following year studied at Queen Anne High School in Seattle. In April 1927, Hubbard's father was posted to Guam, and that summer, Hubbard and his mother traveled to Guam with a brief stop-over in a couple of Chinese ports. He recorded his impressions of the places he visited and disdained the poverty of the inhabitants of Japan and China, whom he described as "gooks" and "lazy [and] ignorant". In September 1927, while living with his grandparents, Hubbard enrolled at Helena High School, where he contributed to the school paper. On May 11, 1928, Hubbard was dropped from enrollment at Helena High due to failing grades. Hubbard left Helena and rejoined his parents in Guam in June 1928. Between October and December 1928, Hubbard's family and others traveled from Guam to China. Upon his return to Guam, Hubbard spent much of his time writing dozens of short stories and essays. Hubbard failed the Naval Academy entrance examination. In September 1929, Hubbard was enrolled at the Swavely Preparatory School in Manassas, Virginia, to prepare him for a second attempt at the examination. During his first semester at Swavely, Hubbard complained of eye strain and was diagnosed with myopia; this diagnosis precluded any enrollment in the Naval Academy. As an adult, Hubbard would write to himself: "Your eyes are getting progressively better. They became bad when you used them as an excuse to escape the naval academy". He was instead sent to Woodward School for Boys in Washington, D.C. to qualify for admission to George Washington University without having to sit for the entrance examination. He successfully graduated from the school in June 1930 and entered the University the following September. On September 24, 1930, Hubbard began studying civil engineering at George Washington University's School of Engineering, at the behest of his father. Academically, Hubbard did poorly: his transcripts show he failed many courses including atomic physics, though later in life he would claim to have been a nuclear physicist. In September 1931 he was placed on probation due to poor grades, and in April 1932 he again received a warning for his lack of academic achievement. During his first year, Hubbard helped organize the university Glider Club and was elected its president. During what would become Hubbard's final semester at GWU, he organized an ill-fated trip to the Caribbean for June 1932 to explore and film the pirate "strongholds and bivouacs of the Spanish Main" and to "collect whatever one collects for exhibits in museums". After multiple misfortunes, and with the expedition running low on funds, the ship's owners ordered it to return to Baltimore. Hubbard failed to return to the university the following year. After his father volunteered him for a Red Cross relief effort, on October 23, 1932, Hubbard traveled to Puerto Rico. En route, Hubbard apparently "decided to abandon the Red Cross", instead opting to accompany a mineral surveyor in a futile bid to find gold. Hubbard returned from Puerto Rico to D.C. in February 1933. He struck up a relationship with a fellow glider pilot named Margaret "Polly" Grubb. The two were married on April 13. She was already pregnant when they married, but had a miscarriage shortly afterwards; a few months later, she became pregnant again. On May 7, 1934, she gave birth prematurely to a son who was named Lafayette Ronald Hubbard, Jr., whose nickname was "Nibs". 
Their second child, Katherine May, was born on January 15, 1936. The Hubbards lived for a while in Laytonsville, Maryland, but were chronically short of money. Hubbard became a well-known and prolific writer for pulp fiction magazines during the 1930s. His literary career began with contributions to the George Washington University student newspaper, "The University Hatchet", as a reporter for a few months in 1931. Six of his pieces were published commercially from 1932 to 1933. The going rate for freelance writers at the time was only a cent a word, so Hubbard's total earnings from these articles would have been less than $100. The pulp magazine "Thrilling Adventures" became the first to publish one of his short stories, in February 1934. Over the next six years, pulp magazines published many of his short stories under a variety of pen names, including Winchester Remington Colt, Kurt von Rachen, René Lafayette, Joe Blitz and Legionnaire 148. Although he was best known for his fantasy and science fiction stories, Hubbard wrote in a wide variety of genres, including adventure fiction, aviation, travel, mysteries, westerns and even romance. Hubbard knew and associated with writers such as Isaac Asimov, Robert A. Heinlein, L. Sprague de Camp and A. E. van Vogt. In the spring of 1936, the Hubbards moved to Bremerton, Washington. They lived there for a time with Hubbard's aunts and grandmother before finding a place of their own at nearby South Colby. According to one of his friends at the time, Robert MacDonald Ford, the Hubbards were "in fairly dire straits for money" but sustained themselves on the income from Hubbard's writing. His first full-length novel, "Buckskin Brigades", was published in 1937. He became a "highly idiosyncratic" writer of science fiction after being taken under the wing of editor John W. Campbell, who published many of Hubbard's short stories and also serialized a number of well-received novelettes that Hubbard wrote for Campbell's magazines "Unknown" and "Astounding Science Fiction". These included "Fear", "Final Blackout" and "Typewriter in the Sky". The science fiction newsletter "Xignals" reported that Hubbard wrote "over 100,000 words a month" during his peak. Martin Gardner asserted that his writing "[wa]s done at lightning speed." He wrote the script for "The Secret of Treasure Island", a 1938 Columbia Pictures movie serial. Hubbard spent an increasing amount of time in New York City, working out of a hotel room where his wife suspected him of carrying on affairs with other women. In April 1938, Hubbard reportedly underwent a dental procedure and had a reaction to the drug used in the procedure. According to his account, this triggered a revelatory near-death experience. Allegedly inspired by this experience, Hubbard composed a manuscript, which was never published, with working titles of "The One Command" or "Excalibur". Arthur J. Burks, who read the work in 1938, later recalled it discussed the "one command": to survive. This theme would be revisited in "Dianetics". Burks also recalled the work discussing the psychology of a lynch mob. Hubbard would later cite "Excalibur" as an early version of "Dianetics". According to Burks, Hubbard believed that "Excalibur" would "revolutionize everything" and that "it was somewhat more important, and would have a greater impact upon people, than the Bible." 
According to Burks, Hubbard "was so sure he had something 'away out and beyond' anything else that he had sent telegrams to several book publishers, telling them that he had written 'THE book' and that they were to meet him at Penn Station, and he would discuss it with them and go with whomever [sic] gave him the best offer." However, nobody bought the manuscript. Hubbard's failure to sell "Excalibur" depressed him; he told his wife in an October 1938 letter: "Writing action pulp doesn't have much agreement with what I want to do because it retards my progress by demanding incessant attention and, further, actually weakens my name. So you see I've got to do something about it and at the same time strengthen the old financial position." He went on: Forrest J Ackerman, later Hubbard's literary agent, recalled that Hubbard told him "whoever read it either went insane or committed suicide. And he said that the last time he had shown it to a publisher in New York, he walked into the office to find out what the reaction was, the publisher called for the reader, the reader came in with the manuscript, threw it on the table and threw himself out of the skyscraper window." In 1948, Hubbard would tell a convention of science fiction fans that the inspiration for "Excalibur" came during an operation in which he "died" for eight minutes. The manuscript later became part of Scientology mythology. An early 1950s Scientology publication offered signed "gold-bound and locked" copies for the sum of $1,500 apiece. It warned that "four of the first fifteen people who read it went insane" and that it would be "[r]eleased only on sworn statement not to permit other readers to read it. Contains data not to be released during Mr. Hubbard's stay on earth." Hubbard joined The Explorers Club in February 1940 on the strength of his claimed explorations in the Caribbean and survey flights in the United States. He persuaded the club to let him carry its flag on an "Alaskan Radio-Experimental Expedition". The crew consisted of Hubbard and his wife aboard his ketch "Magician". The trip was plagued by problems and did not get any further than Ketchikan. The ship's engine broke down only two days after they set off in July 1940. Having underestimated the cost of the trip, he did not have enough money to repair the broken engine. He raised money by writing stories and contributing to the local radio station and eventually earned enough to fix the engine, making it back to Puget Sound on December 27, 1940. After returning from Alaska, Hubbard applied to join the United States Navy. His friend Robert MacDonald Ford, by now a State Representative for Washington, sent a letter of recommendation describing Hubbard as "one of the most brilliant men I have ever known". Ford later said that Hubbard had written the letter himself: "I don't know why Ron wanted a letter. I just gave him a letter-head and said, 'Hell, you're the writer, you write it!'" Hubbard was commissioned as a lieutenant junior grade in the United States Naval Reserve on July 19, 1941. By November, he was posted to New York for training as an intelligence officer. On December 18, he was posted to the Philippines and set out for the posting via Australia. While in Melbourne awaiting transport to Manila, Hubbard was sent back to the United States. The U.S. naval attaché reported, "This officer is not satisfactory for independent duty assignment. He is garrulous and tries to give impressions of his importance. He also seems to think he has unusual ability in most lines. 
These characteristics indicate that he will require close supervision for satisfactory performance of any intelligence duty." After a brief stint censoring cables, Hubbard's request for sea duty was approved and he reported to a Neponset, Massachusetts, shipyard which was converting a trawler into a gunboat to be classified as USS "YP-422". On September 25, 1942, the commandant of Boston Navy Yard informed Washington that, in his view, Hubbard was "not temperamentally fitted for independent command." Days later, on October 1, Hubbard was summarily relieved of his command. Hubbard was sent to submarine chaser training, and in 1943 was posted to Portland, Oregon, to take command of a submarine chaser, the USS "PC-815", which was under construction. On May 18, USS "PC-815" sailed on her shakedown cruise, bound for San Diego. Only five hours into the voyage, Hubbard believed he had detected an enemy submarine. Hubbard spent the next 68 hours engaged in combat, until finally receiving orders to return to Astoria. Admiral Frank Jack Fletcher, commander of the Northwest Sea Frontier, concluded: "An analysis of all reports convinces me that there was no submarine in the area." Fletcher suggested Hubbard had mistaken a "known magnetic deposit" for an enemy sub. The following month, Hubbard unwittingly sailed "PC-815" into Mexican territorial waters and conducted gunnery practice off the Coronado Islands, in the belief that they were uninhabited and belonged to the United States. The Mexican government complained and Hubbard was relieved of command. A report written after the incident rated Hubbard as unsuitable for independent duties and "lacking in the essential qualities of judgment, leadership and cooperation". The report recommended he be assigned "duty on a large vessel where he can be properly supervised". After being relieved of command of "PC-815", Hubbard began reporting sick, citing a variety of ailments, including ulcers, malaria, and back pains. Hubbard was admitted to the San Diego naval hospital for observation—he would remain there for nearly three months. Years later, Hubbard would privately write to himself: "Your stomach trouble you used as an excuse to keep the Navy from punishing you. You are free of the Navy." In 1944, Hubbard was posted to Portland, where the USS "Algol" was under construction. The ship was commissioned in July and Hubbard served as the navigation and training officer. Hubbard requested, and was granted, a transfer to the School of Military Government in Princeton. The night before his departure, the ship's log reports that "The Navigating Officer [Hubbard] reported to the OOD [Officer On Duty] that an attempt at sabatage [sic] had been made sometime between 1530–1600. A coke bottle filled with gasoline with a cloth wick inserted had been concealed among cargo which was to be hoisted aboard and stored in No 1 hold. It was discovered before being taken on board. ONI, FBI and NSD authorities reported on the scene and investigations were started." Hubbard attended school in Princeton until January 1945, when he was assigned to Monterey, California. In April, he again reported sick and was admitted to Oak Knoll Naval Hospital, Oakland. His complaints included "headaches, rheumatism, conjunctivitis, pains in his side, stomach aches, pains in his shoulder, arthritis, hemorrhoids". An October 1945 naval board found that Hubbard was "considered physically qualified to perform duty ashore, preferably within the continental United States". 
He was discharged from hospital on December 4, 1945, and transferred to inactive duty on February 17, 1946. Hubbard would ultimately resign his commission after the publication of "Dianetics", with effect from October 30, 1950. Hubbard's life underwent a turbulent period immediately after the war. According to his own account, he "was abandoned by family and friends as a supposedly hopeless cripple and a probable burden upon them for the rest of my days". His daughter Katherine presented a rather different version: his wife had refused to uproot their children from their home in Bremerton, Washington, to join him in California. Their marriage was by now in terminal difficulties and he chose to stay in California. In August 1945 Hubbard moved into the Pasadena mansion of John "Jack" Whiteside Parsons. A leading rocket propulsion researcher at the California Institute of Technology and a founder of the Jet Propulsion Laboratory, Parsons led a double life as an avid occultist and Thelemite, follower of the English ceremonial magician Aleister Crowley and leader of a lodge of Crowley's magical order, Ordo Templi Orientis (OTO). He let rooms in the house only to tenants who he specified should be "atheists and those of a Bohemian disposition". Hubbard befriended Parsons and soon became sexually involved with Parsons's 21-year-old girlfriend, Sara "Betty" Northrup. Despite this Parsons was very impressed with Hubbard and reported to Crowley: Hubbard, whom Parsons referred to in writing as "Frater H", became an enthusiastic collaborator in the Pasadena OTO. The two men collaborated on the "Babalon Working", a sex magic ritual intended to summon an incarnation of Babalon, the supreme Thelemite Goddess. It was undertaken over several nights in February and March 1946 in order to summon an "elemental" who would participate in further sex magic. As Richard Metzger describes it, The "elemental" arrived a few days later in the form of Marjorie Cameron, who agreed to participate in Parsons's rites. Soon afterwards, Parsons, Hubbard and Sara agreed to set up a business partnership, "Allied Enterprises", in which they invested nearly their entire savings—the vast majority contributed by Parsons. The plan was for Hubbard and Sara to buy yachts in Miami and sail them to the West Coast to sell for a profit. Hubbard had a different idea; he wrote to the U.S. Navy requesting permission to leave the country "to visit Central & South America & China" for the purposes of "collecting writing material"—in other words, undertaking a world cruise. Aleister Crowley strongly criticized Parsons's actions, writing: "Suspect Ron playing confidence trick—Jack Parsons weak fool—obvious victim prowling swindlers." Parsons attempted to recover his money by obtaining an injunction to prevent Hubbard and Sara leaving the country or disposing of the remnants of his assets. They attempted to sail anyway but were forced back to port by a storm. A week later, Allied Enterprises was dissolved. Parsons received only a $2,900 promissory note from Hubbard and returned home "shattered". He had to sell his mansion to developers soon afterwards to recoup his losses. Hubbard's fellow writers were well aware of what had happened between him and Parsons. L. Sprague de Camp wrote to Isaac Asimov on August 27, 1946, to tell him: On August 10, 1946, Hubbard bigamously married Sara, while still married to Polly. It was not until 1947 that his first wife learned that he had remarried. 
Hubbard agreed to divorce Polly in June that year and the marriage was dissolved shortly afterwards, with Polly given custody of the children. During this period, Hubbard authored a document which has come to be called the "Affirmations" (also referred to as the "Admissions"). They consist of a series of statements by and addressed to Hubbard, relating to various physical, sexual, psychological and social issues that he was encountering in his life. The Affirmations appear to have been intended as a form of self-hypnosis aimed at resolving the author's psychological problems and instilling a positive mental attitude. In "Inside Scientology", Janet Reitman called the Affirmations "the most revealing psychological self-assessment, complete with exhortations to himself, that [Hubbard] had ever made." Among the Affirmations: After Hubbard's wedding to Sara, the couple settled at Laguna Beach, California, where Hubbard took a short-term job looking after a friend's yacht before resuming his fiction writing to supplement the small disability allowance that he was receiving as a war veteran. Working from a trailer in a run-down area of North Hollywood, Hubbard sold a number of science fiction stories that included his "Ole Doc Methuselah" series and the serialized novels "The End Is Not Yet" and "To the Stars". However, he remained short of money and his son, L. Ron Hubbard Jr., testified later that Hubbard was dependent on his own father and Margaret's parents for money and his writings, for which he was paid a penny per word, never garnered him more than $10,000 prior to the founding of Scientology. He repeatedly wrote to the Veterans Administration (VA) asking for an increase in his war pension. In October 1947, he wrote to request psychiatric treatment: The VA eventually did increase his pension, but his money problems continued. On August 31, 1948, he was arrested in San Luis Obispo, California, and subsequently pleaded guilty to a charge of petty theft, for which he was ordered to pay a $25 fine. In 1948, Hubbard and his second wife Sara moved from California to Savannah, Georgia, where he would later claim to have worked as a volunteer lay practitioner in a local psychiatric clinic. In letters to friends, he began to make the first public mentions of what was to become Dianetics. He wrote in January 1949 that he was working on a "book of psychology" about "the cause and cure of nervous tension", which he was going to call "The Dark Sword", "Excalibur" or "Science of the Mind". On March 8, 1949, Hubbard wrote to friend and fellow science-fiction author Robert Heinlein from Savannah, Georgia. Hubbard referenced Heinlein's earlier work "Coventry", in which a utopian government has the ability to psychologically "cure" criminals of violent personality traits. He told Heinlein: His first published articles on Dianetics were "Terra Incognita: The Mind" in "The Explorers Journal" and another, which had a far greater impact, in "Astounding Science Fiction". In April 1949, Hubbard wrote to several professional organizations to offer his research. None were interested, so he turned to his editor John W. Campbell, who was more receptive due to a long-standing fascination with fringe psychologies and psychic powers ("psionics") that "permeated both his fiction and non-fiction". Campbell invited Hubbard and Sara to move into a cottage at Bay Head, New Jersey, not far from his own home at Plainfield. In July 1949, Campbell recruited an acquaintance, Dr. 
Joseph Winter, to help develop Hubbard's new therapy of "Dianetics". Campbell told Winter: Hubbard collaborated with Campbell and Winter to refine his techniques, testing them on science fiction fans recruited by Campbell. The basic principle of Dianetics was that the brain recorded every experience and event in a person's life, even when unconscious. Bad or painful experiences were stored as what he called "engrams" in a "reactive mind". These could be triggered later in life, causing emotional and physical problems. By carrying out a process he called "auditing", a person could be regressed through his engrams to re-experience past events. This enabled engrams to be "cleared". The subject, who would now be in a state of "Clear", would have a perfectly functioning mind with an improved IQ and photographic memory. The "Clear" would be cured of physical ailments ranging from poor eyesight to the common cold, which Hubbard asserted were purely psychosomatic. Winter submitted a paper on Dianetics to the "Journal of the American Medical Association" and the "American Journal of Psychiatry" but both journals rejected it. Hubbard and his collaborators decided to announce Dianetics in Campbell's "Astounding Science Fiction" instead. In an editorial, Campbell said: "Its power is almost unbelievable; it proves the mind not only can but does rule the body completely; following the sharply defined basic laws set forth, physical ills such as ulcers, asthma and arthritis can be cured, as can all other psychosomatic ills." The birth of Hubbard's second daughter Alexis Valerie, delivered by Winter on March 8, 1950, came in the middle of the preparations to launch Dianetics. Shortly afterwards, in April 1950, a "Hubbard Dianetic Research Foundation" was established in Elizabeth, New Jersey, with Hubbard, Sara, Winter and Campbell on the board of directors. Hubbard claimed to have discovered "the hidden source of all psychosomatic ills and human aberration" when he introduced Dianetics to the world in the 1950s. He further claimed that "skills have been developed for their invariable cure." Dianetics was duly launched in "Astounding's" May 1950 issue and on May 9, Hubbard's companion book "Dianetics: The Modern Science of Mental Health" was published by Hermitage House. Hubbard abandoned freelance writing in order to promote Dianetics, writing several books about it in the next decade and delivering an estimated 4,000 lectures while founding Dianetics research organizations. Dianetics was an immediate commercial success and sparked what Martin Gardner calls "a nationwide cult of incredible proportions". By August 1950, Hubbard's book had sold 55,000 copies, was selling at the rate of 4,000 a week and was being translated into French, German and Japanese. Five hundred Dianetic auditing groups had been set up across the United States. Dianetics was poorly received by the press and the scientific and medical professions. The American Psychological Association criticized Hubbard's claims as "not supported by empirical evidence". "Scientific American" said that Hubbard's book contained "more promises and less evidence per page than any publication since the invention of printing", while "The New Republic" called it a "bold and immodest mixture of complete nonsense and perfectly reasonable common sense, taken from long acknowledged findings and disguised and distorted by a crazy, newly invented terminology". 
Some of Hubbard's fellow science fiction writers also criticized it; Isaac Asimov considered it "gibberish" while Jack Williamson called it "a lunatic revision of Freudian psychology". Several famous individuals became involved with Dianetics. Aldous Huxley received auditing from Hubbard; the poet Jean Toomer and the science fiction writers Theodore Sturgeon and A. E. van Vogt became trained Dianetics auditors. Van Vogt temporarily abandoned writing and became the head of the newly established Los Angeles branch of the Hubbard Dianetic Research Foundation. Other branches were established in New York, Washington, D.C., Chicago, and Honolulu. Although Dianetics was not cheap, a great many people were nonetheless willing to pay; van Vogt later recalled "doing little but tear open envelopes and pull out $500 checks from people who wanted to take an auditor's course". Financial controls were lax. Hubbard himself took large sums with no explanation of what he was doing with them. On one occasion, van Vogt saw Hubbard taking a lump sum of $56,000 out of the Los Angeles Foundation's proceeds. One of Hubbard's employees, Helen O'Brien, commented that at the Elizabeth, N.J. branch of the Foundation, the books showed that "a month's income of $90,000 is listed, with only $20,000 accounted for". Hubbard played a very active role in the Dianetics boom, writing, lecturing and training auditors. Many of those who knew him spoke of being impressed by his personal charisma. Jack Horner, who became a Dianetics auditor in 1950, later said, "He was very impressive, dedicated and amusing. The man had tremendous charisma; you just wanted to hear every word he had to say and listen for any pearl of wisdom." Isaac Asimov recalled in his autobiography how, at a dinner party, he, Robert Heinlein, L. Sprague de Camp and their wives "all sat as quietly as pussycats and listened to Hubbard. He told tales with perfect aplomb and in complete paragraphs." As Jon Atack comments, he was "a charismatic figure who compelled the devotion of those around him". Christopher Evans described the personal qualities that Hubbard brought to Dianetics and Scientology: Dianetics lost public credibility in August 1950 when a presentation by Hubbard before an audience of 6,000 at the Shrine Auditorium in Los Angeles failed disastrously. He introduced a Clear named Sonya Bianca and told the audience that as a result of undergoing Dianetic therapy she now possessed perfect recall. However, Gardner writes, "in the demonstration that followed, she failed to remember a single formula in physics (the subject in which she was majoring) or the color of Hubbard's tie when his back was turned. At this point, a large part of the audience got up and left." Hubbard's supporters soon began to have doubts about Dianetics. Winter became disillusioned, and in 1951, he wrote that he had never seen a single convincing Clear: "I have seen some individuals who are supposed to have been 'clear,' but their behavior does not conform to the definition of the state. Moreover, an individual supposed to have been 'clear' has undergone a relapse into conduct which suggests an incipient psychosis." He also deplored the Foundation's omission of any serious scientific research. Hubbard also faced other practitioners moving into leadership positions within the Dianetics community. Dianetics was structured as an open, public practice in which others were free to pursue their own lines of research and claim that their approaches to auditing produced better results than Hubbard's. 
The community rapidly splintered and its members mingled Hubbard's ideas with a wide variety of esoteric and occult practices. By late 1950, the Elizabeth, N.J. Foundation was in financial crisis and the Los Angeles Foundation was more than $200,000 in debt. Winter and Art Ceppos, the publisher of Hubbard's book, resigned under acrimonious circumstances. Campbell also resigned, criticizing Hubbard for being impossible to work with, and blamed him for the disorganization and financial ruin of the Foundations. By the summer of 1951, the Elizabeth, N.J. Foundation and all of its branches had closed. The collapse of Hubbard's marriage to Sara created yet more problems. He had begun an affair with his 20-year-old public relations assistant in late 1950, while Sara started a relationship with Dianetics auditor Miles Hollister. Hubbard secretly denounced the couple to the FBI in March 1951, portraying them in a letter as communist infiltrators. According to Hubbard, Sara was "currently intimate with [communists] but evidently under coercion. Drug addiction set in fall 1950. Nothing of this known to me until a few weeks ago." Hollister was described as having a "sharp chin, broad forehead, rather Slavic". He was said to be the "center of most turbulence in our organization" and "active and dangerous". The FBI did not take Hubbard seriously: an agent annotated his correspondence with the comment, "Appears mental." Three weeks later, Hubbard and two Foundation staff seized Sara and his year-old daughter Alexis and forcibly took them to San Bernardino, California, where he attempted unsuccessfully to find a doctor to examine Sara and declare her insane. He let Sara go but took Alexis to Havana, Cuba. Sara filed a divorce suit on April 23, 1951, that accused him of marrying her bigamously and subjecting her to sleep deprivation, beatings, strangulation, kidnapping and exhortations to commit suicide. The case led to newspaper headlines such as "Ron Hubbard Insane, Says His Wife." Sara finally secured the return of her daughter in June 1951 by agreeing to a settlement with her husband in which she signed a statement, written by him, declaring: Dianetics appeared to be on the edge of total collapse. However, it was saved by Don Purcell, a millionaire businessman and Dianeticist who agreed to support a new Foundation in Wichita, Kansas. The collaboration between Hubbard and Purcell ended after less than a year, when the two fell out over the future direction of Dianetics. The Wichita Foundation became financially nonviable after a court ruled that it was liable for the unpaid debts of its defunct predecessor in Elizabeth, N.J. The ruling prompted Purcell and the other directors of the Wichita Foundation to file for voluntary bankruptcy in February 1952. Hubbard resigned immediately and accused Purcell of having been bribed by the American Medical Association to destroy Dianetics. Hubbard established a "Hubbard College" on the other side of town where he continued to promote Dianetics while fighting Purcell in the courts over the Foundation's intellectual property. Only six weeks after setting up the Hubbard College and marrying a staff member, 18-year-old Mary Sue Whipp, Hubbard closed it down and moved with his new bride to Phoenix, Arizona. He established a Hubbard Association of Scientologists International to promote his new "Science of Certainty"—Scientology. 
Scientology and Dianetics have been differentiated as follows: Dianetics concerns releasing the mind from the "distorting influence of engrams", and Scientology "is the study and handling of the spirit in relation to itself, universes and other life". The Church of Scientology attributes its genesis to Hubbard's discovery of "a new line of research"—"that man is most fundamentally a spiritual being (a thetan)". Non-Scientologist writers have suggested alternative motives: that he aimed "to reassert control over his creation", that he believed "he was about to lose control of Dianetics", or that he wanted to ensure "he would be able to stay in business even if the courts eventually awarded control of Dianetics and its valuable copyrights to ... the hated Don Purcell." Harlan Ellison has told a story of seeing Hubbard at a gathering of the Hydra Club in 1953 or 1954. Hubbard was complaining of not being able to make a living on what he was being paid as a science fiction writer. Ellison says that Lester del Rey told Hubbard that what he needed to do to get rich was start a religion. Hubbard expanded upon the basics of Dianetics to construct a spiritually oriented (though at this stage not religious) doctrine based on the concept that the true self of a person was a thetan—an immortal, omniscient and potentially omnipotent entity. Hubbard taught that thetans, having created the material universe, had forgotten their god-like powers and become trapped in physical bodies. Scientology aimed to "rehabilitate" each person's self (the thetan) to restore its original capacities and become once again an "Operating Thetan". Hubbard insisted humanity was imperiled by the forces of "aberration", which were the result of engrams carried by immortal thetans for billions of years. In 2012, Ohio State University professor Hugh Urban asserted that Hubbard had adopted many of his theories from the early-to-mid-20th-century astral projection pioneer Sylvan Muldoon, stating that Hubbard's description of exteriorizing the thetan is extremely similar, if not identical, to the descriptions of astral projection in occult literature popularized by Muldoon's widely read "Phenomena of Astral Projection" (1951, co-written with Hereward Carrington), and that Muldoon's description of the astral body as being connected to the physical body by a long, thin, elastic cord is virtually identical to the one described in Hubbard's "Excalibur" vision. Hubbard introduced a device called an E-meter that he presented as having, as Miller puts it, "an almost mystical power to reveal an individual's innermost thoughts". He promulgated Scientology through a series of lectures, bulletins and books such as "A History of Man" ("a cold-blooded and factual account of your last sixty trillion years") and "Scientology: 8-8008" ("With this book, the ability to make one's body old or young at will, the ability to heal the ill without physical contact, the ability to cure the insane and the incapacitated, is set forth for the physician, the layman, the mathematician and the physicist.") Scientology was organized in a very different way from the decentralized Dianetics movement. The Hubbard Association of Scientologists (HAS) was the only official Scientology organization. Training procedures and doctrines were standardized and promoted through HAS publications, and administrators and auditors were not permitted to deviate from Hubbard's approach. Branches or "orgs" were organized as franchises, rather like a fast food restaurant chain. 
Each franchise holder was required to pay ten percent of income to Hubbard's central organization. They were expected to find new recruits, known as "raw meat", but were restricted to providing only basic services. Costlier higher-level auditing was only provided by Hubbard's central organization. Although this model would eventually be extremely successful, Scientology was a very small-scale movement at first. Hubbard started off with only a few dozen followers, generally dedicated Dianeticists; a seventy-hour series of lectures in Philadelphia in December 1952 was attended by just 38 people. Hubbard was joined in Phoenix by his 18-year-old son Nibs, who had been unable to settle down in high school. Nibs had decided to become a Scientologist, moved into his father's home and went on to become a Scientology staff member and "professor". Hubbard also traveled to the United Kingdom to establish his control over a Dianetics group in London. It was very much a shoestring operation; as Helen O'Brien later recalled, "there was an atmosphere of extreme poverty and undertones of a grim conspiracy over all. At 163 Holland Park Avenue was an ill-lit lecture room and a bare-boarded and poky office some eight by ten feet—mainly infested by long haired men and short haired and tatty women." On September 24, 1952, only a few weeks after arriving in London, Hubbard's wife Mary Sue gave birth to her first child, a daughter whom they named Diana Meredith de Wolfe Hubbard. In February 1953, Hubbard acquired a doctorate from an unaccredited degree mill called Sequoia University. As membership declined and finances grew tighter, Hubbard reversed the hostility to religion he had voiced in "Dianetics". A few weeks after becoming "Dr." Hubbard, he authored a letter outlining plans for transforming Scientology into a religion. In that letter, Hubbard proposed setting up a chain of "Spiritual Guidance Centers" charging customers $500 for twenty-four hours of auditing. The letter's recipient, Helen O'Brien, resigned the following September. She criticized Hubbard for creating "a temperate zone voodoo, in its inelasticity, unexplainable procedures, and mindless group euphoria". The idea may not have been new; Hubbard has been quoted as telling a science fiction convention in 1948: "Writing for a penny a word is ridiculous. If a man really wants to make a million dollars, the best way would be to start his own religion." J. Gordon Melton notes, "There is no record of Hubbard having ever made this statement, though several of his science fiction colleagues have noted the broaching of the subject on one of their informal conversations." Despite objections, on December 18, 1953, Hubbard incorporated the Church of Scientology, Church of American Science and Church of Spiritual Engineering in Camden, New Jersey. Hubbard, his wife Mary Sue and his secretary John Galusha became the trustees of all three corporations. The reason for Scientology's religious transformation was explained by officials of the HAS: Scientology franchises became Churches of Scientology and some auditors began dressing as clergymen, complete with clerical collars. If they were arrested in the course of their activities, Hubbard advised, they should sue for massive damages for molesting "a Man of God going about his business". 
A few years later he told Scientologists: "If attacked on some vulnerable point by anyone or anything or any organization, always find or manufacture enough threat against them to cause them to sue for peace ... Don't ever defend, always attack." Any individual breaking away from Scientology and setting up his own group was to be shut down: The 1950s saw Scientology growing steadily. Hubbard finally achieved victory over Don Purcell in 1954 when the latter, worn out by constant litigation, handed the copyrights of Dianetics back to Hubbard. Most of the formerly independent Scientology and Dianetics groups were either driven out of business or were absorbed into Hubbard's organizations. Hubbard marketed Scientology through medical claims, for example attracting polio sufferers by presenting the Church of Scientology as a scientific research foundation investigating polio cases. One advertisement during this period stated: Scientology became a highly profitable enterprise for Hubbard. He implemented a scheme under which he was paid a percentage of the Church of Scientology's gross income, and by 1957 he was being paid about $250,000. His family grew, too, with Mary Sue giving birth to three more children—Geoffrey Quentin McCaully on January 6, 1954; Mary Suzette Rochelle on February 13, 1955; and Arthur Ronald Conway on June 6, 1958. In the spring of 1959, he used his new-found wealth to purchase Saint Hill Manor, an 18th-century country house in Sussex, formerly owned by Sawai Man Singh II, the Maharaja of Jaipur. The house became Hubbard's permanent residence and an international training center for Scientologists. By the start of the 1960s, Hubbard was the leader of a worldwide movement with thousands of followers. A decade later, however, he had left Saint Hill Manor and moved aboard his own private fleet of ships as the Church of Scientology faced worldwide controversy. The Church of Scientology says that the problems of this period were due to "vicious, covert international attacks" by the United States government, "all of which were proven false and baseless, which were to last 27 years and finally culminated in the Government being sued for 750 million dollars for conspiracy." Behind the attacks, stated Hubbard, lay a vast conspiracy of "psychiatric front groups" secretly controlling governments: "Every single lie, false charge and attack on Scientology has been traced directly to this group's members. They have sought at great expense for nineteen years to crush and eradicate any new development in the field of the mind. They are actively preventing any effectiveness in this field." Hubbard believed that Scientology was being infiltrated by saboteurs and spies and introduced "security checking" to identify those he termed "potential trouble sources" and "suppressive persons". Members of the Church of Scientology were interrogated with the aid of E-meters and were asked questions such as "Have you ever practiced homosexuality?" and "Have you ever had unkind thoughts about L. Ron Hubbard?" For a time, Scientologists were even interrogated about crimes committed in past lives: "Have you ever destroyed a culture?" "Did you come to Earth for evil purposes?" "Have you ever zapped anyone?" 
He also sought to exert political influence, advising Scientologists to vote against Richard Nixon in the 1960 presidential election and establishing a Department of Government Affairs "to bring government and hostile philosophies or societies into a state of complete compliance with the goals of Scientology". This, he said, "is done by high-level ability to control and in its absence by a low-level ability to overwhelm. Introvert such agencies. Control such agencies." The U.S. Government was already well aware of Hubbard's activities. The FBI had a lengthy file on him, including a 1951 interview with an agent who considered him a "mental case". Police forces in a number of jurisdictions began exchanging information about Scientology through the auspices of Interpol, which eventually led to prosecutions. In 1958, the U.S. Internal Revenue Service withdrew the Washington, D.C. Church of Scientology's tax exemption after it found that Hubbard and his family were profiting unreasonably from Scientology's ostensibly non-profit income. The Food and Drug Administration took action against Scientology's medical claims, seizing thousands of pills being marketed as "radiation cures" as well as publications and E-meters. The Church of Scientology was required to label them as being "ineffective in the diagnosis or treatment of disease". Following the FDA's actions, Scientology attracted increasingly unfavorable publicity across the English-speaking world. It faced particularly hostile scrutiny in Victoria, Australia, where it was accused of brainwashing, blackmail, extortion and damaging the mental health of its members. The Victorian state government established a Board of Inquiry into Scientology in November 1963. Its report, published in October 1965, condemned every aspect of Scientology and Hubbard himself. He was described as being of doubtful sanity, having a persecution complex and displaying strong indications of paranoid schizophrenia with delusions of grandeur. His writings were characterized as nonsensical, abounding in "self-glorification and grandiosity, replete with histrionics and hysterical, incontinent outbursts". Sociologist Roy Wallis comments that the report drastically changed public perceptions of Scientology: The report led to Scientology being banned in Victoria, Western Australia and South Australia, and led to more negative publicity around the world. Newspapers and politicians in the UK pressed the British government for action against Scientology. In April 1966, hoping to form a remote "safe haven" for Scientology, Hubbard traveled to the southern African country Rhodesia (today Zimbabwe) and looked into setting up a base there at a hotel on Lake Kariba. Despite his attempts to curry favour with the local government—he personally delivered champagne to Prime Minister Ian Smith's house, but Smith refused to see him—Rhodesia promptly refused to renew Hubbard's visa, compelling him to leave the country. In July 1968, the British Minister of Health, Kenneth Robinson, announced that foreign Scientologists would no longer be permitted to enter the UK and Hubbard himself was excluded from the country as an "undesirable alien". Further inquiries were launched in Canada, New Zealand and South Africa. Hubbard took three major new initiatives in the face of these challenges. "Ethics Technology" was introduced to tighten internal discipline within Scientology. 
It required Scientologists to "disconnect" from any organization or individual—including family members—deemed to be disruptive or "suppressive". According to church-operated websites, "A person who disconnects is simply exercising his right to communicate or not to communicate with a particular person." Hubbard stated: "Communication, however, is a two-way flow. If one has the right to communicate, then one must also have the right to not receive communication from another. It is this latter corollary of the right to communicate that gives us our right to privacy." Scientologists were also required to write "Knowledge Reports" on each other, reporting transgressions or misapplications of Scientology methods. Hubbard promulgated a long list of punishable "Misdemeanors", "Crimes", and "High Crimes". The "Fair Game" policy was introduced, which was applicable to anyone deemed an "enemy" of Scientology: "May be deprived of property or injured by any means by any Scientologist without any discipline of the Scientologist. May be tricked, sued or lied to or destroyed." At the start of March 1966, Hubbard created the Guardian's Office (GO), a new agency within the Church of Scientology that was headed by his wife Mary Sue. It dealt with Scientology's external affairs, including public relations, legal actions and the gathering of intelligence on perceived threats. As Scientology faced increasingly negative media attention, the GO retaliated with hundreds of writs for libel and slander; it issued more than forty on a single day. Hubbard ordered his staff to find "lurid, blood sex crime actual evidence on [Scientology's] attackers". Finally, at the end of 1966, Hubbard acquired his own fleet of ships. He established the "Hubbard Explorational Company Ltd" which purchased three ships—the "Enchanter", a forty-ton schooner; the "Avon River", an old trawler; and the "Royal Scotman", a former Irish Sea cattle ferry that he made his home and flagship. The ships were crewed by the Sea Organization or Sea Org, a group of Scientologist volunteers, with the support of a couple of professional seamen. After Hubbard created the Sea Org "fleet" in early 1967, it began an eight-year voyage, sailing from port to port in the Mediterranean Sea and eastern North Atlantic. The fleet traveled as far as Corfu in the eastern Mediterranean and Dakar and the Azores in the Atlantic, but rarely stayed anywhere for longer than six weeks. Ken Urquhart, Hubbard's personal assistant at the time, later recalled: When Hubbard established the Sea Org, he publicly declared that he had relinquished his management responsibilities. According to Miller, this was not true. He received daily telex messages from Scientology organizations around the world reporting their statistics and income. The Church of Scientology sent him $15,000 a week and millions of dollars were transferred to his bank accounts in Switzerland and Liechtenstein. Couriers arrived regularly, conveying luxury food for Hubbard and his family or cash that had been smuggled from England to avoid currency export restrictions. Along the way, Hubbard sought to establish a safe haven in "a friendly little country where Scientology would be allowed to prosper", as Miller puts it. The fleet stayed at Corfu for several months in 1968–1969. Hubbard renamed the ships after Greek gods—the "Royal Scotman" was rechristened "Apollo"—and he praised the recently established military dictatorship. 
The Sea Org was represented as "Professor Hubbard's Philosophy School" in a telegram to the Greek government. In March 1969, however, Hubbard and his ships were ordered to leave. In mid-1972, Hubbard tried again in Morocco, establishing contacts with the country's secret police and training senior policemen and intelligence agents in techniques for detecting subversives. The program ended in failure when it became caught up in internal Moroccan politics, and Hubbard left the country hastily in December 1972. At the same time, Hubbard was still developing Scientology's doctrines. A Scientology biography states that "free of organizational duties and aided by the first Sea Org members, L. Ron Hubbard now had the time and facilities to confirm in the physical universe some of the events and places he had encountered in his journeys down the track of time." In 1965, he designated several existing Scientology courses as confidential, repackaging them as the first of the esoteric "OT levels". Two years later he announced the release of OT3, the "Wall of Fire", revealing the secrets of an immense disaster that had occurred "on this planet, and on the other seventy-five planets which form this Confederacy, seventy-five million years ago". Scientologists were required to undertake the first two OT levels before learning how Xenu, the leader of the Galactic Confederacy, had shipped billions of people to Earth and blown them up with hydrogen bombs, following which their traumatized spirits were stuck together at "implant stations", brainwashed with false memories and eventually became contained within human beings. The discovery of OT3 was said to have taken a major physical toll on Hubbard, who announced that he had broken a knee, an arm, and his back during the course of his research. A year later, in 1968, he unveiled OT levels 4 to 6 and began delivering OT training courses to Scientologists aboard the "Royal Scotman". Scientologists around the world were presented with a glamorous picture of life in the Sea Org and many applied to join Hubbard aboard the fleet. What they found was rather different from the image. Most of those joining had no nautical experience at all. Mechanical difficulties and blunders by the crews led to a series of embarrassing incidents and near-disasters. Following one incident in which the rudder of the "Royal Scotman" was damaged during a storm, Hubbard ordered the ship's entire crew to be reduced to a "condition of liability" and wear gray rags tied to their arms. The ship itself was treated the same way, with dirty tarpaulins tied around its funnel to symbolize its lower status. According to those aboard, conditions were appalling; the crew was worked to the point of exhaustion, given meagre rations and forbidden to wash or change their clothes for several weeks. Hubbard maintained a harsh disciplinary regime aboard the fleet, punishing mistakes by confining people in the "Royal Scotman" bilge tanks without toilet facilities and with food provided in buckets. At other times erring crew members were thrown overboard with Hubbard looking on and, occasionally, filming. David Mayo, a Sea Org member at the time, later recalled: From about 1970, Hubbard was attended aboard ship by the children of Sea Org members, organized as the Commodore's Messenger Organization (CMO). 
They were mainly young girls dressed in hot pants and halter tops, who were responsible for running errands for Hubbard such as lighting his cigarettes, dressing him or relaying his verbal commands to other members of the crew. In addition to his wife Mary Sue, he was accompanied by all four of his children by her, though not his first son Nibs, who had defected from Scientology in late 1959. The younger Hubbards were all members of the Sea Org and shared its rigors, though Quentin Hubbard reportedly found it difficult to adjust and attempted suicide in mid-1974. During the 1970s, Hubbard faced an increasing number of legal threats. French prosecutors charged him and the French Church of Scientology with fraud and customs violations in 1972. He was advised that he was at risk of being extradited to France. Hubbard left the Sea Org fleet temporarily at the end of 1972, living incognito in Queens, New York, until he returned to his flagship in September 1973 when the threat of extradition had abated. Scientology sources say that he carried out "a sociological study in and around New York City". Hubbard's health deteriorated significantly during this period. A chain-smoker, he also suffered from bursitis and excessive weight, and had a prominent growth on his forehead. He suffered serious injuries in a motorcycle accident in 1973 and had a heart attack in 1975 that required him to take anticoagulant drugs for the next year. In September 1978, Hubbard had a pulmonary embolism, falling into a coma, but recovered. He remained active in managing and developing Scientology, establishing the controversial Rehabilitation Project Force in 1974 and issuing policy and doctrinal bulletins. However, the Sea Org's voyages were coming to an end. The "Apollo" was banned from several Spanish ports and was expelled from Curaçao in October 1975. The Sea Org came to be suspected of being a CIA operation, leading to a riot in Funchal, Madeira, when the "Apollo" docked there. At the time, "The Apollo Stars", a musical group founded by Hubbard and made up entirely of ship-bound members of the Sea Org, was offering free on-pier concerts in an attempt to promote Scientology, and the riot occurred at one of these events. Hubbard decided to relocate back to the United States to establish a "land base" for the Sea Org in Florida. The Church of Scientology attributes this decision to the activities on the "Apollo" having "outgrow[n] the ship's capacity". In October 1975, Hubbard moved into a hotel suite in Daytona Beach. The Fort Harrison Hotel in Clearwater, Florida, was secretly acquired as the location for the "land base". On December 5, 1975, Hubbard and his wife Mary Sue moved into a condominium complex in nearby Dunedin. Their presence was meant to be a closely guarded secret but was accidentally compromised the following month. Hubbard immediately left Dunedin and moved to Georgetown, Washington, D.C., accompanied by a handful of aides and messengers, but not his wife. Six months later, following another security alert in July 1976, Hubbard moved to another safe house in Culver City, California. He lived there for only about three months, relocating in October to the more private confines of the Olive Tree Ranch near La Quinta. His second son Quentin committed suicide a few weeks later in Las Vegas. Throughout this period, Hubbard was heavily involved in directing the activities of the Guardian's Office (GO), the legal bureau/intelligence agency that he had established in 1966. 
He believed that Scientology was being attacked by an international Nazi conspiracy, which he termed the "Tenyaka Memorial", through a network of drug companies, banks and psychiatrists in a bid to take over the world. In 1973, he instigated the "Snow White Program" and directed the GO to remove negative reports about Scientology from government files and track down their sources. The GO was ordered to "get all false and secret files on Scientology, LRH ... that cannot be obtained legally, by all possible lines of approach ... i.e., job penetration, janitor penetration, suitable guises utilizing covers." His involvement in the GO's operations was concealed through the use of codenames. The GO carried out covert campaigns on his behalf such as Operation Bulldozer Leak, intended "to effectively spread the rumor that will lead Government, media, and individual [Suppressive Persons] to conclude that LRH has no control of the C of S and no legal liability for Church activity". He was kept informed of GO operations, such as the theft of medical records from a hospital, harassment of psychiatrists and infiltrations of organizations that had been critical of Scientology at various times, such as the Better Business Bureau, the American Medical Association, and the American Psychiatric Association. Members of the GO infiltrated and burglarized numerous government organizations, including the U.S. Department of Justice and the Internal Revenue Service. After two GO agents were caught in the Washington, D.C. headquarters of the IRS, the FBI carried out simultaneous raids on GO offices in Los Angeles and Washington, D.C. on July 7, 1977. They retrieved wiretap equipment, burglary tools and some 90,000 pages of incriminating documents. Hubbard was not prosecuted, though he was labeled an "unindicted co-conspirator" by government prosecutors. His wife Mary Sue was indicted and subsequently convicted of conspiracy. She was sent to a federal prison along with ten other Scientologists. Hubbard's troubles increased in February 1978 when a French court convicted him in absentia for obtaining money under false pretenses. He was sentenced to four years in prison and a 35,000 FF ($7,000) fine. He went into hiding in April 1979, moving to an apartment in Hemet, California, where his only contact with the outside world was via ten trusted messengers. He cut contact with everyone else, even his wife, whom he saw for the last time in August 1979. Hubbard faced a possible indictment for his role in Operation Freakout, the GO's campaign against New York journalist Paulette Cooper, and in February 1980 he disappeared into deep cover in the company of two trusted messengers, Pat and Annie Broeker. For the first few years of the 1980s, Hubbard and the Broekers lived on the move, touring the Pacific Northwest in a recreational vehicle and living for a while in apartments in Newport Beach and Los Angeles. Hubbard used his time in hiding to write his first new works of science fiction for nearly thirty years—"Battlefield Earth" (1982) and "Mission Earth", a ten-volume series published between 1985 and 1987. They received mixed responses; as writer Jeff Walker puts it, they were "treated derisively by most critics but greatly admired by followers". Hubbard also wrote and composed music for three of his albums, which were produced by the Church of Scientology. The book soundtrack "Space Jazz" was released in 1982. "Mission Earth" and "The Road to Freedom" were released posthumously in 1986.
In Hubbard's absence, members of the Sea Org staged a takeover of the Church of Scientology and purged many veteran Scientologists. A young messenger, David Miscavige, became Scientology's "de facto" leader. Mary Sue Hubbard was forced to resign her position and her daughter Suzette became Miscavige's personal maid. For the last two years of his life, Hubbard lived in a luxury Blue Bird motorhome on Whispering Winds, a 160-acre ranch near Creston, California. He remained in deep hiding while controversy raged in the outside world about whether he was still alive and, if so, where. He spent his time "writing and researching", according to a spokesperson, and pursued photography and music, overseeing construction work and checking on his animals. He repeatedly redesigned the property, spending millions of dollars remodeling the ranch house—which went virtually uninhabited—and building a quarter-mile horse-racing track with an observation tower, which reportedly was never used. He was still closely involved in managing the Church of Scientology via secretly delivered orders and continued to receive large amounts of money, of which "Forbes" magazine estimated "at least $200 million [was] gathered in Hubbard's name through 1982." In September 1985, the IRS notified the Church that it was considering indicting Hubbard for tax fraud. Hubbard suffered further ill-health, including chronic pancreatitis, during his residence at Whispering Winds. He suffered a stroke on January 17, 1986, and died a week later. His body was cremated and the ashes were scattered at sea. Scientology leaders announced that his body had become an impediment to his work and that he had decided to "drop his body" to continue his research on another planet, having "learned how to do it without a body". Hubbard was survived by his wife Mary Sue and all of his children except his second son Quentin. His will provided a trust fund to support Mary Sue; her children Arthur, Diana and Suzette; and Katherine, the daughter of his first wife Polly. He disinherited two of his other children. L. Ron Hubbard, Jr. had become estranged, changed his name to "Ronald DeWolf" and, in 1982, sued unsuccessfully for control of his father's estate. Alexis Valerie, Hubbard's daughter by his second wife Sara, had attempted to contact her father in 1971. She was rebuffed with the implied claim that her real father was Jack Parsons rather than Hubbard, and that her mother had been a Nazi spy during the war. Both later accepted settlements when litigation was threatened. In 2001, Diana and Suzette were reported to still be Church members, while Arthur had left and become an artist. Hubbard's great-grandson, Jamie DeWolf, is a noted slam poet. The copyrights of his works and much of his estate and wealth were willed to the Church of Scientology. In a bulletin dated May 5, 1980, Hubbard told his followers to preserve his teachings until an eventual reincarnation when he would return "not as a religious leader but as a political one". The Church of Spiritual Technology (CST), a sister organization of the Church of Scientology, has engraved Hubbard's entire corpus of Scientology and Dianetics texts on steel tablets stored in titanium containers. They are buried at the Trementina Base in a vault under a mountain near Trementina, New Mexico, on top of which the CST's logo has been bulldozed on such a gigantic scale that it is visible from space. 
Hubbard holds Guinness World Records as the most published author, with 1,084 works, for the most translated book (70 languages, for "The Way to Happiness") and for the most audiobooks (185 as of April 2009). According to Galaxy Press, Hubbard's "Battlefield Earth" has sold over 6 million copies and "Mission Earth" a further 7 million, with each of its ten volumes becoming a "New York Times" bestseller on its release; however, the "Los Angeles Times" reported in 1990 that Hubbard's followers had been buying large numbers of the books and re-issuing them to stores, so as to boost sales figures. Opinions are divided about his literary legacy. Scientologists have written of their desire to "make Ron the most acclaimed and widely known author of all time". The sociologist William Sims Bainbridge writes that even at his peak in the late 1930s Hubbard was regarded by readers of "Astounding Science Fiction" as merely "a passable, familiar author but not one of the best", while by the late 1970s "the [science fiction] subculture wishes it could forget him" and fans gave him a worse rating than any other of the "Golden Age" writers. Posthumously, in 1996, the Los Angeles City Council renamed part of a street close to the Church of Scientology's headquarters in recognition of Hubbard. In 2011, the West Valley City Council declared March 13 as L. Ron Hubbard Centennial Day. In April 2016, the New Jersey State Board of Education approved Hubbard's birthday as one of its religious holidays. In 2004, eighteen years after Hubbard's death, the Church claimed eight million followers worldwide. According to religious scholar J. Gordon Melton, this is an overestimate, counting as Scientologists people who had merely bought a book. The City University of New York's American Religious Identification Survey found that by 2009 only 25,000 Americans identified as Scientologists. Hubbard's presence still pervades Scientology. Every Church of Scientology maintains an office reserved for Hubbard, with a desk, chair and writing equipment, ready to be used. Lonnie D. Kliever notes that Hubbard was "the only source of the religion, and he has no successor". Hubbard is referred to simply as "Source" within Scientology and the theological acceptability of any Scientology-related activity is determined by how closely it adheres to Hubbard's doctrines. Hubbard's name and signature are official trademarks of the Religious Technology Center, established in 1982 to control and oversee the use of Hubbard's works and Scientology's trademarks and copyrights. The RTC is the central organization within Scientology's complex corporate hierarchy and has put much effort into re-checking the accuracy of all Scientology publications to "ensur[e] the availability of the pure unadulterated writings of Mr. Hubbard to the coming generations". The Danish historian of religions Mikael Rothstein describes Scientology as "a movement focused on the figure of Hubbard". He comments: "The fact that [Hubbard's] life is mythologized is as obvious as in the cases of Jesus, Muhammad or Siddartha Gotama. This is how religion works. Scientology, however, rejects this analysis altogether, and goes to great lengths to defend every detail of Hubbard's amazing and fantastic life as plain historical fact."
Hubbard is presented as "the master of a multitude of disciplines" who performed extraordinary feats as a photographer, composer, scientist, therapist, explorer, navigator, philosopher, poet, artist, humanitarian, adventurer, soldier, scout, musician, and in many other fields of endeavor. The Church of Scientology portrays Hubbard's life and work as having proceeded seamlessly, "as if they were a continuous set of predetermined events and discoveries that unfolded through his lifelong research" even up to and beyond his death. According to Rothstein's assessment of Hubbard's legacy, Scientology consciously aims to transfer Hubbard's charismatic authority into institutionalized authority over the organization, even after his death. Hubbard is presented as a virtually superhuman religious ideal just as Scientology itself is presented as the most important development in human history. As Rothstein puts it, "reverence for Scientology's scripture is reverence for Hubbard, the man who in the Scientological perspective single-handedly brought salvation to all human beings." David G. Bromley of the University of Virginia comments that the real Hubbard has been transformed into a "prophetic persona", "LRH", which acts as the basis for his prophetic authority within Scientology and transcends his biographical history. According to Dorthe Refslund Christensen, Hubbard's hagiography directly compares him with Buddha. Hubbard is viewed as having made Eastern traditions more accessible by approaching them with a scientific attitude. "Hubbard is seen as the ultimate-cross-cultural savior; he is thought to be able to release man from his miserable condition because he had the necessary background, and especially the right attitude." Hubbard, although increasingly deified after his death, is to Scientologists their founder and the model Operating Thetan, not God. Hubbard then is the "Source", "inviting others to follow his path in ways comparable to a Bodhisattva figure" according to religious scholar Donald A. Westbrook. Scientologists refer to L. Ron Hubbard as "Ron", speaking of him as a personal friend. In the late 1970s two men began to assemble a picture of Hubbard's life. Michael Linn Shannon, a resident of Portland, Oregon, became interested in Hubbard's life story after an encounter with a Scientology recruiter. Over the next four years he collected previously undisclosed records and documents. He intended to write an exposé of Hubbard and sent a copy of his findings and key records to a number of contacts but was unable to find a publisher. Shannon's findings were acquired by Gerry Armstrong, a Scientologist who had been appointed Hubbard's official archivist. He had been given the job of assembling documents relating to Hubbard's life for the purpose of helping Omar V. Garrison, a non-Scientologist who had written two books sympathetic to Scientology, to write an official biography. However, the documents that he uncovered convinced both Armstrong and Garrison that Hubbard had systematically misrepresented his life. Garrison refused to write a "puff piece" and declared that he would not "repeat all the falsehoods they [the Church of Scientology] had perpetuated over the years". He went on to write a "warts and all" biography, while Armstrong quit Scientology, taking five boxes of papers with him. The Church of Scientology and Mary Sue Hubbard sued for the return of the documents while settling out of court with Garrison, requiring him to turn over the nearly completed manuscript of the biography.
In October 1984 Judge Paul G. Breckenridge ruled in Armstrong's favor. In November 1987, the British journalist and writer Russell Miller published "Bare-faced Messiah", the first full-length biography of L. Ron Hubbard. He drew on Armstrong's papers, official records and interviews with those who had known Hubbard including ex-Scientologists and family members. The book was well-received by reviewers but the Church of Scientology sought unsuccessfully to prohibit its publication on the grounds of copyright infringement. Other critical biographical accounts are found in Bent Corydon's "L. Ron Hubbard, Messiah or Madman?" (1987) and Jon Atack's "A Piece of Blue Sky" (1990). Hagiographical accounts published by the Church of Scientology describe Hubbard as "a child prodigy of sorts" who rode a horse before he could walk and was able to read and write by the age of four. A Scientology profile says that he was brought up on his grandfather's "large cattle ranch in Montana" where he spent his days "riding, breaking broncos, hunting coyote and taking his first steps as an explorer". His grandfather is described as a "wealthy Western cattleman" from whom Hubbard "inherited his fortune and family interests in America, Southern Africa, etc." Scientology claims that Hubbard became a "blood brother" of the Native American Blackfeet tribe at the age of six through his friendship with a Blackfeet medicine man. However, contemporary records show that his grandfather, Lafayette Waterbury, was a veterinarian, not a rancher, and was not wealthy. Hubbard was actually raised in a townhouse in the center of Helena. According to his aunt, his family did not own a ranch but did own one cow and four or five horses on a few acres of land outside the city. Hubbard lived over a hundred miles from the Blackfeet reservation. While some sources support Scientology's claim of Hubbard's blood brotherhood, other sources say that the tribe did not practice blood brotherhood and no evidence has been found that he had ever been a Blackfeet blood brother. According to Scientology biographies, during a journey to Washington, D.C. in 1923 Hubbard learned of Freudian psychology from Commander Joseph "Snake" Thompson, a U.S. Navy psychoanalyst and medic. Scientology biographies describe this encounter as giving Hubbard training in a particular scientific approach to the mind, which he found unsatisfying. In his diary, Hubbard claimed he was the youngest Eagle Scout in the U.S. Scientology texts present Hubbard's travels in Asia as a time when he searched intensely for answers to human suffering, exploring ancient Eastern philosophies but finding them lacking. He is described as traveling to China "at a time when few Westerners could enter" and, according to Scientology, spending his time questioning Buddhist lamas and meeting old Chinese magicians. According to church materials, his travels were funded by his "wealthy grandfather". Scientology accounts say that Hubbard "made his way deep into Manchuria's Western Hills and beyond — to break bread with Mongolian bandits, share campfires with Siberian shamans and befriend the last in the line of magicians from the court of Kublai Khan". However, Hubbard did not record these events in his diary. He remained unimpressed with China and the Chinese, writing: "A Chinaman can not live up to a thing, he always drags it down."
He characterized the sights of Beijing as "rubberneck stations" for tourists and described the palaces of the Forbidden City as "very trashy-looking" and "not worth mentioning". He was impressed by the Great Wall of China near Beijing, but concluded of the Chinese: "They smell of all the baths they didn't take. The trouble with China is, there are too many chinks here." Despite not graduating from George Washington, Hubbard claimed "to be not only a graduate engineer, but 'a member of the first United States course in formal education in what is called today nuclear physics.'" However, a Church of Scientology biography describes him as "never noted for being in class" and says that he "thoroughly detest[ed] his subjects". He earned poor grades, was placed on probation in September 1931 and dropped out altogether in the fall of 1932. Scientology accounts say that he "studied nuclear physics at George Washington University in Washington, D.C., before he started his studies about the mind, spirit and life" and Hubbard himself stated that he "set out to find out from nuclear physics a knowledge of the physical universe, something entirely lacking in Asian philosophy". His university records indicate that his exposure to "nuclear physics" consisted of one class in "atomic and molecular phenomena" for which he earned an "F" grade. Scientologists claim he was more interested in extracurricular activities, particularly writing and flying. According to church materials, "he earned his wings as a pioneering barnstormer at the dawn of American aviation" and was "recognized as one of the country's most outstanding pilots. With virtually no training time, he takes up powered flight and barnstorms throughout the Midwest." His airman certificate, however, records that he qualified to fly only gliders rather than powered aircraft and gave up his certificate when he could not afford the renewal fee. After leaving university Hubbard traveled to Puerto Rico on what the Church of Scientology calls the "Puerto Rican Mineralogical Expedition". Scientologists claim he "made the first complete mineralogical survey of Puerto Rico" as a means of "augmenting his [father's] pay with a mining venture", during which he "sluiced inland rivers and crisscrossed the island in search of elusive gold" as well as carrying out "much ethnological work amongst the interior villages and native hillsmen". Hubbard's unofficial biographer Russell Miller writes that neither the United States Geological Survey nor the Puerto Rican Department of Natural Resources have any record of any such expedition. According to the Church of Scientology, Hubbard was "called to Hollywood" to work on film scripts in the mid-1930s, although Scientology accounts differ as to exactly when this was (whether 1935, 1936 or 1937). The Church of Scientology claims he also worked on the Columbia serials "The Mysterious Pilot" (1937), "The Great Adventures of Wild Bill Hickok" (1938) and "The Spider Returns" (1941), though his name does not appear on the credits. Hubbard also claimed to have written "Dive Bomber" (1941), Cecil B. DeMille's "The Plainsman" (1936) and John Ford's "Stagecoach" (1939). Scientology accounts of the expedition to Alaska describe "Hubbard's re-charting of an especially treacherous Inside Passage, and his ethnological study of indigenous Aleuts and Haidas" and tell of how "along the way, he not only roped a Kodiak Bear, but braved seventy-mile-an-hour winds and commensurate seas off the Aleutian Islands." 
They are divided about how far Hubbard's expedition actually traveled. The Church disputes the official record of Hubbard's naval career. It asserts that the records are incomplete and perhaps falsified "to conceal Hubbard's secret activities as an intelligence officer". In 1990 the Church provided the "Los Angeles Times" with a document that was said to be a copy of Hubbard's official record of service. The U.S. Navy told the "Times" that "its contents are not supported by Hubbard's personnel record." "The New Yorker" reported in February 2011 that the Scientology document was considered by federal archivists to be a forgery. The Church of Scientology presents him as a "much-decorated war hero who commanded a corvette and during hostilities was crippled and wounded". Scientology publications say he served as a "Commodore of Corvette squadrons" in "all five theaters of World War II" and was awarded "twenty-one medals and palms" for his service. He was "severely wounded and was taken crippled and blinded" to a military hospital, where he "worked his way back to fitness, strength and full perception in less than two years, using only what he knew and could determine about Man and his relationship to the universe". He said that he had seen combat repeatedly, telling A. E. van Vogt that he had once sailed his ship "right into the harbor of a Japanese occupied island in the Dutch East Indies. His attitude was that if you took your flag down the Japanese would not know one boat from another, so he tied up at the dock, went ashore and wandered around by himself for three days." Hubbard's war service has great significance in the history and mythology of the Church of Scientology, as he is said to have cured himself through techniques that would later underpin Scientology and Dianetics. According to Moulton, Hubbard told him that he had been machine-gunned in the back near the Dutch East Indies. Hubbard asserted that his eyes had been damaged as well, either "by the flash of a large-caliber gun" or when he had "a bomb go off in my face". Scientology texts say that he returned from the war "[b]linded with injured optic nerves, and lame with physical injuries to hip and back" and was twice pronounced dead. Hubbard's official Navy service records indicate that "his military performance was, at times, substandard" and he received only four campaign medals rather than the claimed twenty-one. He was never recorded as being injured or wounded in combat and never received a Purple Heart. The Church of Scientology says that Hubbard's key breakthrough in the development of Dianetics was made at Oak Knoll Naval Hospital in Oakland, California. Scientology accounts do not mention Hubbard's involvement in occultism. He is instead described as "continu[ing] to write to help support his research" during this period into "the development of a means to better the condition of man". The Church of Scientology has nonetheless acknowledged Hubbard's involvement with the OTO in a 1969 statement written by Hubbard himself. The Church says Hubbard was "sent in" by his fellow science fiction author Robert Heinlein, "who was running off-book intelligence operations for naval intelligence at the time". However, Heinlein's authorized biographer has said that he looked into the matter at the suggestion of Scientologists but found nothing to corroborate claims that Heinlein had been involved, and his biography of Heinlein makes no mention of the matter.
The Church of Scientology says Hubbard quit the Navy because it "attempted to monopolize all his researches and force him to work on a project 'to make man more suggestible' and when he was unwilling, tried to blackmail him by ordering him back to active duty to perform this function. Having many friends he was able to instantly resign from the Navy and escape this trap." The Navy said in a statement in 1980: "There is no evidence on record of an attempt to recall him to active duty." Following Hubbard's death, Bridge Publications published several stand-alone biographical accounts of his life. Marco Frenschkowski notes that "non-Scientologist readers immediately recognize some parts of Hubbard's life are here systematically left out: no information whatsoever is given about his private life (his marriages, divorces, children), his legal affairs and so on." The Church maintains an extensive website presenting the official version of Hubbard's life. It also owns a number of properties dedicated to Hubbard including the Los Angeles-based L. Ron Hubbard Life Exhibition (a presentation of Hubbard's life), the Author Services Center (a presentation of Hubbard's writings), and the L. Ron Hubbard House in Washington, D.C. In late 2012, Bridge published a comprehensive official biography of Hubbard, titled "The L. Ron Hubbard Series: A Biographical Encyclopedia", written primarily by Dan Sherman, the official Hubbard biographer at the time. This most recent official Church of Scientology biography is a 17-volume series, with each volume focusing on a different aspect of Hubbard's life, including his music, photography, geographic exploration, humanitarian work, and nautical career. During his lifetime, a number of brief biographical sketches were also published in his Scientology books. The Church of Scientology issued "the only authorized LRH Biography" in October 1977 (it has since been followed by the Sherman "Biographic Encyclopedia"). His life was illustrated in print in "What Is Scientology?", a glossy publication published in 1978 with paintings of Hubbard's life contributed by his son Arthur. According to the Church of Scientology, Hubbard produced some 65 million words on Dianetics and Scientology, contained in about 500,000 pages of written material, 3,000 recorded lectures and 100 films. His works of fiction included some 500 novels and short stories.
https://en.wikipedia.org/wiki?curid=17862
Luddite The Luddites were a secret oath-based organization of English textile workers in the 19th century, a radical faction which destroyed textile machinery as a form of protest. The group was protesting against manufacturers who used machines in what they called "a fraudulent and deceitful manner" to get around standard labor practices. Luddites feared that the time spent learning the skills of their craft would go to waste, as machines would replace their role in the industry. Over time, the term has come to mean one opposed to industrialisation, automation, computerisation, or new technologies in general. The Luddite movement began in Nottingham in England and culminated in a region-wide rebellion that lasted from 1811 to 1816. Mill and factory owners took to shooting protesters and eventually the movement was suppressed with legal and military force. The name Luddite is of uncertain origin. The movement was said to be named after Ned Ludd, an apprentice who allegedly smashed two stocking frames in 1779 and whose name had become emblematic of machine destroyers. Ned Ludd, however, was completely fictional and used as a way to shock and provoke the government. The name developed into the imaginary General Ludd or King Ludd, who was reputed to live in Sherwood Forest like Robin Hood. The lower classes of the 18th century were not openly disloyal to the king or government, generally speaking, and violent action was rare because punishments were harsh. The majority of individuals were primarily concerned with meeting their own daily needs. Working conditions in the English textile mills of the time were harsh, but the mills were efficient enough to threaten the livelihoods of skilled artisans. The new inventions produced textiles faster and cheaper because they were operated by less-skilled, low-wage labourers, and the Luddite goal was to gain a better bargaining position with their employers. Kevin Binfield asserts that organized action by stockingers had occurred at various times since 1675, and he suggests that the movements of the early 19th century should be viewed in the context of the hardships suffered by the working class during the Napoleonic Wars, rather than as an absolute aversion to machinery. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756. Skilled artisans in the cloth, building, shipbuilding, printing, and cutlery trades organized friendly societies to peacefully insure themselves against unemployment, sickness, and intrusion of foreign labour into their trades, as was common among guilds. Malcolm L. Thomis argued in his 1970 history "The Luddites" that machine-breaking was one of a very few tactics that workers could use to increase pressure on employers, to undermine lower-paid competing workers, and to create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centering on breaking threshing machines. Handloom weavers burned mills and pieces of factory machinery. Textile workers destroyed industrial equipment during the late 18th century, prompting acts such as the Protection of Stocking Frames, etc. Act 1788.
The Luddite movement emerged during the harsh economic climate of the Napoleonic Wars, which saw a rise of difficult working conditions in the new textile factories. Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers. The movement began in Arnold, Nottingham on 11 March 1811 and spread rapidly throughout England over the following two years. The Luddites met at night on the moors surrounding industrial towns to practice drills and manoeuvres. Their main areas of operation began in Nottinghamshire in November 1811, followed by the West Riding of Yorkshire in early 1812 then Lancashire by March 1813. They smashed stocking frames and cropping frames among others. There does not seem to have been any political motivation behind the Luddite riots and there was no national organization; the men were merely attacking what they saw as the reason for the decline in their livelihoods. Luddites battled the British Army at Burton's Mill in Middleton and at Westhoughton Mill, both in Lancashire. The Luddites and their supporters anonymously sent death threats to, and possibly attacked, magistrates and food merchants. Activists smashed Heathcote's lacemaking machine in Loughborough in 1816. He and other industrialists had secret chambers constructed in their buildings that could be used as hiding places during an attack. In 1817, an unemployed Nottingham stockinger and probably ex-Luddite, named Jeremiah Brandreth led the Pentrich Rising. While this was a general uprising unrelated to machinery, it can be viewed as the last major heroic Luddite act. The British Army clashed with the Luddites on several occasions. At one time there were more British soldiers fighting the Luddites than there were fighting Napoleon on the Iberian Peninsula. Three Luddites, led by George Mellor, ambushed and assassinated mill owner William Horsfall of Ottiwells Mill in Marsden, West Yorkshire at Crosland Moor in Huddersfield. Horsfall had remarked that he would "Ride up to his saddle in Luddite blood". Mellor fired the fatal shot to Horsfall's groin, and all three men were arrested. Lord Byron denounced what he considered to be the plight of the working class, the government's inane policies and ruthless repression in the House of Lords on 27 February 1812: "I have been in some of the most oppressed provinces of Turkey; but never, under the most despotic of infidel governments, did I behold such squalid wretchedness as I have seen since my return, in the very heart of a Christian country". The British government sought to suppress the Luddite movement with a mass trial at York in January 1813, following the attack on Cartwrights mill at Rawfolds near Cleckheaton. The government charged over 60 men, including Mellor and his companions, with various crimes in connection with Luddite activities. While some of those charged were actual Luddites, many had no connection to the movement. Although the proceedings were legitimate jury trials, many were abandoned due to lack of evidence and 30 men were acquitted. These trials were certainly intended to act as show trials to deter other Luddites from continuing their activities. The harsh sentences of those found guilty, which included execution and penal transportation, quickly ended the movement. Parliament made "machine breaking" (i.e. industrial sabotage) a capital crime with the Frame Breaking Act of 1812. 
Lord Byron opposed this legislation, becoming one of the few prominent defenders of the Luddites after the treatment of the defendants at the York trials. Ironically, Lord Byron's only legitimate daughter, Ada Lovelace, would later be credited as the first computer programmer for her work on the Analytical Engine, a machine whose punched-card input was adapted from the Jacquard loom. In the 19th century, occupations that arose from the growth of trade and shipping in ports, as well as in "domestic" manufacture, were notorious for precarious employment prospects. Underemployment was chronic during this period, and it was common practice to retain a larger workforce than was typically necessary as insurance against labour shortages in boom times. Moreover, the organization of manufacture by merchant-capitalists in the textile industry was inherently unstable. While the financiers' capital was still largely invested in raw material, it was easy to increase commitment where trade was good and almost as easy to cut back when times were bad. Merchant-capitalists lacked the incentive of later factory owners, whose capital was invested in building and plants, to maintain a steady rate of production and return on fixed capital. The combination of seasonal variations in wage rates and violent short-term fluctuations springing from harvests and war produced periodic outbreaks of violence. Nowadays, the term is often used to describe someone who is opposed or resistant to new technologies. In 1956, during a British Parliamentary debate, a Labour spokesman said that "organised workers were by no means wedded to a 'Luddite Philosophy'." More recently, the term Neo-Luddism has emerged to describe opposition to many forms of technology. According to a manifesto drawn up by the Second Luddite Congress (April 1996; Barnesville, Ohio), Neo-Luddism is "a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age." The term "Luddite fallacy" is used by economists in reference to the fear that technological unemployment inevitably generates structural unemployment and is consequently macroeconomically injurious. If a technological innovation results in a reduction of necessary labour inputs in a given sector, then the industry-wide cost of production falls, which lowers the competitive price and increases the equilibrium quantity supplied, which, theoretically, will require an increase in aggregate labour inputs.
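The reasoning behind the "Luddite fallacy" argument can be made concrete with a small numerical sketch. The following Python snippet is a toy illustration only: the constant-elasticity demand curve, the assumption that price falls in step with unit cost, and all of the numbers are invented for the example rather than taken from any source. It simply shows that whether sector-level employment rises or falls after a labour-saving innovation depends on how strongly demand responds to the lower price.

```python
# Toy illustration of the reasoning behind the "Luddite fallacy" argument.
# All numbers and the constant-elasticity demand curve are assumptions
# chosen for illustration; they are not drawn from the article.

def labour_demand(hours_per_unit, unit_cost_factor, elasticity, base_price, base_quantity):
    """Return total labour demanded after a productivity improvement.

    Price is assumed to fall in proportion to unit cost (competitive market),
    and quantity demanded follows a constant-elasticity demand curve.
    """
    new_price = base_price * unit_cost_factor
    new_quantity = base_quantity * (new_price / base_price) ** (-elasticity)
    return hours_per_unit * new_quantity

# Before: 2 hours of labour per unit, 1,000 units sold at a price of 10.
before_hours = 2.0 * 1000

# After: a machine halves the labour per unit, and unit cost (hence price) falls by half.
for elasticity in (0.5, 1.0, 2.0):
    after_hours = labour_demand(
        hours_per_unit=1.0, unit_cost_factor=0.5,
        elasticity=elasticity, base_price=10.0, base_quantity=1000,
    )
    print(f"elasticity={elasticity}: labour goes from {before_hours:.0f} to {after_hours:.0f} hours")

# With elastic demand (elasticity > 1) the cheaper good sells enough extra units
# that total labour demanded in the sector rises; with inelastic demand, sector
# employment falls, and the economy-wide argument rests on spending freed up elsewhere.
```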
https://en.wikipedia.org/wiki?curid=17864
Anarcho-communism Anarcho-communism, also referred to as anarchist communism, communist anarchism, free communism, libertarian communism and stateless communism, is a political philosophy and anarchist school of thought which advocates the abolition of the state, capitalism, wage labour and private property (while retaining respect for personal property, along with collectively-owned items, goods and services) in favor of common ownership of the means of production and direct democracy as well as a horizontal network of workers' councils with production and consumption based on the guiding principle "From each according to his ability, to each according to his needs". Some forms of anarcho-communism such as insurrectionary anarchism are strongly influenced by egoism and radical individualism, believing anarcho-communism to be the best social system for the realization of individual freedom. Most anarcho-communists view anarcho-communism as a way of reconciling the opposition between the individual and society. Anarcho-communism developed out of radical socialist currents after the French Revolution, but it was first formulated as such in the Italian section of the First International. The theoretical work of Peter Kropotkin took importance later as it expanded and developed pro-organizationalist and insurrectionary anti-organizationalist sections. To date, the best-known examples of an anarcho-communist society (i.e. established around the ideas as they exist today and achieving worldwide attention and knowledge in the historical canon) are the anarchist territories during the Spanish Revolution and the Free Territory during the Russian Revolution, where anarchists such as Nestor Makhno worked to create and defend anarcho-communism through the Revolutionary Insurrectionary Army of Ukraine from 1918 before being conquered by the Bolsheviks in 1921. In 1929, anarcho-communism was achieved in Korea by the Korean Anarchist Federation in Manchuria (KAFM) and the Korean Anarcho-Communist Federation (KACF), with help from anarchist general and independence activist Kim Chwa-chin, lasting until 1931, when Imperial Japan assassinated Kim and invaded from the south, while the Chinese Nationalists invaded from the north, resulting in the creation of Manchukuo, a puppet state of the Empire of Japan. Through the efforts and influence of the Spanish anarchists during the Spanish Revolution within the Spanish Civil War starting in 1936, anarcho-communism existed in most of Aragon, parts of the Levante and Andalusia as well as in the stronghold of anarchist Catalonia before being crushed in 1939 by the combined forces of the Francoist Nationalists (the regime that won the war), Nationalist allies such as Adolf Hitler and Benito Mussolini and even Spanish Communist Party repression (backed by the Soviet Union) as well as economic and armaments blockades from the capitalist states and the Spanish Republic itself governed by the Republicans. Anarcho-communist currents appeared during the English Civil War and the French Revolution of the 17th and 18th centuries, respectively. Gerrard Winstanley, who was part of the radical Diggers movement in England, wrote in his 1649 pamphlet "The New Law of Righteousness" that there "shall be no buying or selling, no fairs nor markets, but the whole earth shall be a common treasury for every man" and "there shall be none Lord over others, but every one shall be a Lord of himself". 
The Diggers themselves resisted tyranny of the ruling class and of kings, instead operating in a cooperative fashion in order to get work done, manage supplies, and increase economic productivity. Because the communes established by the Diggers were free from private property and economic exchange (all items, goods and services were held collectively), they could be called early, functioning communist societies, spread across the rural lands of England. Prior to the Industrial Revolution, common ownership of land and property was much more prevalent across the European continent, but the Diggers were set apart by their struggle against monarchical rule. They sprang up by means of workers' self-management after the fall of Charles I. In 1703, Louis Armand, Baron de Lahontan wrote the novel "New Voyages to North America", in which he outlined how indigenous communities of the North American continent cooperated and organised. The author found the agrarian societies and communities of pre-colonial North America to be nothing like the monarchical, unequal states of Europe, both in their economic structure and in their lack of any state. He wrote that the life the natives lived was "anarchy", this being the first usage of the term to mean something other than chaos. He wrote that there were no priests, courts, laws, police, ministers of state, and no distinction of property, no way to differentiate rich from poor, as they were all equal and thriving cooperatively. During the French Revolution, Sylvain Maréchal, in his "Manifesto of the Equals" (1796), demanded "the communal enjoyment of the fruits of the earth" and looked forward to the disappearance of "the revolting distinction of rich and poor, of great and small, of masters and valets, of governors and governed". Maréchal was critical not only of the unequal distribution of property, but also of how religion would often be used to justify evangelical immorality. He viewed religion and what later came to be known as capitalism (though not by that name in his time) as two sides of the same corrupted coin. He had once said, "Do not be afraid of your God - be afraid of yourself. You are the creator of your own troubles and joys. Heaven and hell are in your own soul". Sylvain Maréchal was personally involved with the Conspiracy of the Equals, a failed attempt at overthrowing the French Directory and establishing a stateless, agrarian socialist utopia. He worked with Gracchus Babeuf not only in writing about what an anarchist country might look like, but also about how it would be achieved. The two were friends, though they did not always see eye to eye, particularly over Maréchal's statement that equality was more important than the arts. An early anarchist communist was Joseph Déjacque, the first person to describe himself as "libertarian". Unlike Proudhon, he argued that "it is not the product of his or her labor that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". According to the anarchist historian Max Nettlau, the first use of the term libertarian communism was in November 1880, when a French anarchist congress employed it to more clearly identify its doctrines. The French anarchist journalist Sébastien Faure, later founder and editor of the four-volume "Anarchist Encyclopedia," started the weekly paper "Le Libertaire" ("The Libertarian") in 1895.
Déjacque "rejected Blanquism, which was based on a division between the 'disciples of the great people's Architect' and 'the people, or vulgar herd,' and was equally opposed to all the variants of social republicanism, to the dictatorship of one man and to 'the dictatorship of the little prodigies of the proletariat.' With regard to the last of these, he wrote that: 'a dictatorial committee composed of workers is certainly the most conceited and incompetent, and hence the most anti-revolutionary, thing that can be found [...] (It is better to have doubtful enemies in power than dubious friends)'. He saw 'anarchic initiative,' 'reasoned will' and 'the autonomy of each' as the conditions for the social revolution of the proletariat, the first expression of which had been the barricades of June 1848 (see Revolutions of 1848). In Déjacque's view, a government resulting from an insurrection remains a reactionary fetter on the free initiative of the proletariat. Or rather, such free initiative can only arise and develop by the masses ridding themselves of the 'authoritarian prejudices' by means of which the state reproduces itself in its primary function of representation and delegation. Déjacque wrote that: 'By government I understand all delegation, all power outside the people,' for which must be substituted, in a process whereby politics is transcended, the 'people in direct possession of their sovereignty,' or the 'organised commune.' For Déjacque, the communist anarchist utopia would fulfil the function of inciting each proletarian to explore his or her own human potentialities, in addition to correcting the ignorance of the proletarians concerning 'social science'". As a coherent, modern economic-political philosophy, anarcho-communism was first formulated in the Italian section of the First International by Carlo Cafiero, Emilio Covelli, Errico Malatesta, Andrea Costa and other ex Mazzinian republicans. The collectivist anarchists advocated remuneration for the type and amount of labor adhering to the principle "to each according to deeds", but they held out the possibility of a post-revolutionary transition to a communist system of distribution according to need. As Mikhail Bakunin's associate James Guillaume put it in his essay "Ideas on Social Organization" (1876): "When [...] production comes to outstrip consumption [...] everyone will draw what he needs from the abundant social reserve of commodities, without fear of depletion; and the moral sentiment which will be more highly developed among free and equal workers will prevent, or greatly reduce, abuse and waste". The collectivist anarchists sought to collectivize ownership of the means of production while retaining payment proportional to the amount and kind of labor of each individual, but the anarcho-communists sought to extend the concept of collective ownership to the products of labor as well. While both groups argued against capitalism, the anarchist communists departed from Proudhon and Bakunin, who maintained that individuals have a right to the product of their individual labor and to be remunerated for their particular contribution to production. However, Errico Malatesta stated that "instead of running the risk of making a confusion in trying to distinguish what you and I each do, let us all work and put everything in common. 
In this way each will give to society all that his strength permits until enough is produced for every one; and each will take all that he needs, limiting his needs only in those things of which there is not yet plenty for every one". In "Anarchy and Communism" (1880), Carlo Cafiero explains that private property in the product of labor will lead to unequal accumulation of capital and therefore the reappearance of social classes and their antagonisms; and thus the resurrection of the state: "If we preserve the individual appropriation of the products of labour, we would be forced to preserve money, leaving more or less accumulation of wealth according to more or less merit rather than need of individuals". At the Florence Conference of the Italian Federation of the International in 1876, held in a forest outside Florence due to police activity, they declared the principles of anarcho-communism; the declaration was reported in an article by Malatesta and Cafiero in the Swiss Jura Federation's bulletin later that year. Peter Kropotkin (1842–1921), often seen as the most important theorist of anarchist communism, outlined his economic ideas in "The Conquest of Bread" and "Fields, Factories and Workshops". Kropotkin felt that cooperation is more beneficial than competition, arguing in his major scientific work "Mutual Aid: A Factor of Evolution" that this was well-illustrated in nature. He advocated the abolition of private property (while retaining respect for personal property) through the "expropriation of the whole of social wealth" by the people themselves, and for the economy to be co-ordinated through a horizontal network of voluntary associations where goods are distributed according to the physical needs of the individual, rather than according to labor. He further argued that these "needs," as society progressed, would not merely be physical needs but "[a]s soon as his material wants are satisfied, other needs, of an artistic character, will thrust themselves forward the more ardently. Aims of life vary with each and every individual; and the more society is civilized, the more will individuality be developed, and the more will desires be varied." He maintained that in anarcho-communism "houses, fields, and factories will no longer be private property, and that they will belong to the commune or the nation and money, wages, and trade would be abolished". Individuals and groups would use and control whatever resources they needed, as the aim of anarchist communism was to place "the product reaped or manufactured at the disposal of all, leaving to each the liberty to consume them as he pleases in his own home". He supported the expropriation of private property into the commons or public goods (while retaining respect for personal property) to ensure that everyone would have access to what they needed without being forced to sell their labour to get it. He said that a "peasant who is in possession of just the amount of land he can cultivate" and "a family inhabiting a house which affords them just enough space [...] considered necessary for that number of people" and the artisan "working with their own tools or handloom" would not be interfered with, arguing that "[t]he landlord owes his riches to the poverty of the peasants, and the wealth of the capitalist comes from the same source".
At the Berne conference of the International Workingmen's Association in 1876, the Italian anarchist Errico Malatesta argued that the revolution "consists more of deeds than words", and that action was the most effective form of propaganda. In the bulletin of the Jura Federation he declared "the Italian federation believes that the insurrectional fact, destined to affirm socialist principles by deed, is the most efficacious means of propaganda". As anarcho-communism emerged in the mid-19th century, it entered into an intense debate with Bakuninist collectivism and, within the anarchist movement, over participation in syndicalism and the workers' movement, as well as on other issues. Thus, in "the theory of the revolution" of anarcho-communism as elaborated by Peter Kropotkin and others, "it is the risen people who are the real agent and not the working class organised in the enterprise (the cells of the capitalist mode of production) and seeking to assert itself as labour power, as a more 'rational' industrial body or social brain (manager) than the employers". As a result, "between 1880 and 1890", with the "perspective of an immanent revolution", the anarcho-communists were "opposed to the official workers' movement, which was then in the process of formation (general Social Democratisation). They were opposed not only to political (statist) struggles but also to strikes which put forward wage or other claims, or which were organised by trade unions." But "While they were not opposed to strikes as such, they were opposed to trade unions and the struggle for the eight-hour day. This anti-reformist tendency was accompanied by an anti-organisational tendency, and its partisans declared themselves in favour of agitation amongst the unemployed for the expropriation of foodstuffs and other articles, for the expropriatory strike and, in some cases, for 'individual recuperation' or acts of terrorism." Even after Peter Kropotkin and others overcame their initial reservations and decided to enter labor unions, there remained "the anti-syndicalist anarchist-communists, who in France were grouped around Sébastien Faure's "Le Libertaire". From 1905 onwards, the Russian counterparts of these anti-syndicalist anarchist-communists become partisans of economic terrorism and illegal 'expropriations'." Illegalism as a practice emerged and within it "[t]he acts of the anarchist bombers and assassins ("propaganda by the deed") and the anarchist burglars ("individual reappropriation") expressed their desperation and their personal, violent rejection of an intolerable society. Moreover, they were clearly meant to be exemplary, invitations to revolt." Proponents and practitioners of these tactics included, among others, Johann Most, Luigi Galleani, Victor Serge, Giuseppe Ciancabilla, and Severino Di Giovanni. The Italian Giuseppe Ciancabilla (1872–1904) wrote in "Against organization" that "we don't want tactical programs, and consequently we don't want organization. Having established the aim, the goal to which we hold, we leave every anarchist free to choose from the means that his sense, his education, his temperament, his fighting spirit suggest to him as best. We don't form fixed programs and we don't form small or great parties.
But we come together spontaneously, and not with permanent criteria, according to momentary affinities for a specific purpose, and we constantly change these groups as soon as the purpose for which we had associated ceases to be, and other aims and needs arise and develop in us and push us to seek new collaborators, people who think as we do in the specific circumstance." By the 1880s, anarcho-communism was already present in the United States as can be seen in the publication of the journal "Freedom: A Revolutionary Anarchist-Communist Monthly" by Lucy Parsons and Lizzy Holmes. In her time in the US, Lucy Parsons debated with fellow anarcho-communist Emma Goldman over issues of free love and feminism. Another anarcho-communist journal later appeared in the US called "The Firebrand". Most anarchist publications in the US were in Yiddish, German, or Russian, but "Free Society" was published in English, permitting the dissemination of anarchist communist thought to English-speaking populations in the US. Around that time these American anarcho-communist sectors entered into debate with the individualist anarchist group around Benjamin Tucker. In February 1888, Alexander Berkman left his native Russia for the United States. Soon after his arrival in New York City, Berkman became an anarchist through his involvement with groups that had formed to campaign to free the men convicted of the 1886 Haymarket bombing. He and Emma Goldman soon came under the influence of Johann Most, the best-known anarchist in the United States and an advocate of propaganda of the deed—"attentat", or violence carried out to encourage the masses to revolt. Berkman became a typesetter for Most's newspaper "Freiheit". In Ukraine, the anarcho-communist guerrilla leader Nestor Makhno led an independent anarchist army during the Russian Civil War. A commander of the peasant "Revolutionary Insurrectionary Army of Ukraine", also known as the "Anarchist Black Army", Makhno led a guerrilla campaign opposing both the Bolshevik "Reds" and monarchist "Whites". The revolutionary autonomous movement of which he was a part made various tactical military pacts while fighting various forces of reaction and organizing the Free Territory of Ukraine, an anarchist society committed to resisting state authority, whether capitalist or Bolshevik. After successfully repelling Austro-Hungarian, White, and Ukrainian Nationalist forces, the Makhnovist militia forces and anarchist communist territories in Ukraine were eventually crushed by Bolshevik military forces. In the Mexican Revolution the Mexican Liberal Party was established and during the early 1910s it led a series of military offensives leading to the conquest and occupation of certain towns and districts in Baja California under the leadership of the anarcho-communist Ricardo Flores Magón. Kropotkin's "The Conquest of Bread", which Flores Magón considered a kind of anarchist bible, served as a basis for the short-lived revolutionary communes in Baja California during the "Magónista" Revolt of 1911. During the Mexican Revolution, Emiliano Zapata and his army and allies, including Pancho Villa, fought for agrarian reform in Mexico.
Specifically, they wanted to establish communal land rights for Mexico's indigenous population, which had mostly lost its land to the wealthy elite of European descent. Zapata was partly influenced by Ricardo Flores Magón. The influence of Flores Magón on Zapata can be seen in the Zapatistas' Plan de Ayala, and even more noticeably in the movement's slogan "Tierra y libertad" ("land and liberty"), the title and maxim of Flores Magón's most famous work, although the slogan was never used by Zapata himself. Zapata's introduction to anarchism came via a local schoolteacher, Otilio Montaño Sánchez – later a general in Zapata's army, executed on May 17, 1917 – who exposed Zapata to the works of Peter Kropotkin and Flores Magón at the same time as Zapata was observing and beginning to participate in the struggles of the peasants for the land. A group of exiled Russian anarchists attempted to address and explain the anarchist movement's failures during the Russian Revolution. In 1926, writing as the group "Dielo Truda" ("Workers' Cause"), they produced the "Organizational Platform of the General Union of Anarchists". The pamphlet is an analysis of the basic anarchist beliefs, a vision of an anarchist society, and recommendations as to how an anarchist organization should be structured. The four main principles by which an anarchist organization should operate, according to the "Platform", are ideological unity, tactical unity, collective action, and federalism. The platform argues that "We have vital need of an organization which, having attracted most of the participants in the anarchist movement, would establish a common tactical and political line for anarchism and thereby serve as a guide for the whole movement". The Platform attracted strong criticism from many sectors of the anarchist movement of the time, including some of the most influential anarchists such as Voline, Errico Malatesta, Luigi Fabbri, Camillo Berneri, Max Nettlau, Alexander Berkman, Emma Goldman and Gregori Maximoff. Malatesta, after initially opposing the Platform, later came to agree with it, stating that the original difference of opinion was due to linguistic confusion: "I find myself more or less in agreement with their way of conceiving the anarchist organisation (being very far from the authoritarian spirit which the "Platform" seemed to reveal) and I confirm my belief that behind the linguistic differences really lie identical positions." Two texts written by the anarchist communists Sébastien Faure and Voline as responses to the Platform, each proposing a different model, became the basis for what is known as the organisation of synthesis, or simply "synthesism". Voline published in 1924 a paper calling for "the anarchist synthesis" and was also the author of the article in Sébastien Faure's "Encyclopedie Anarchiste" on the same topic. The main rationale behind the synthesis was that the anarchist movement in most countries was divided into three main tendencies: communist anarchism, anarcho-syndicalism, and individualist anarchism, so an organization of this kind could accommodate anarchists of all three tendencies.
In his text "Anarchist Synthesis", Faure takes the view that "these currents were not contradictory but complementary, each having a role within anarchism: anarcho-syndicalism as the strength of the mass organisations and the best way for the practice of anarchism; libertarian communism as a proposed future society based on the distribution of the fruits of labour according to the needs of each one; anarcho-individualism as a negation of oppression and affirming the individual right to development of the individual, seeking to please them in every way". The Dielo Truda platform also met with strong criticism in Spain. Miguel Jimenez, a founding member of the Iberian Anarchist Federation (FAI), summarized the objections as follows: the Platform showed too much Marxist influence, it erroneously divided and reduced anarchists into individualist anarchist and anarcho-communist sections, and it sought to unify the anarchist movement along the lines of the anarcho-communists. He saw anarchism as more complex than that: anarchist tendencies are not mutually exclusive, as the platformists saw them, and both individualist and communist views could accommodate anarcho-syndicalism. Sébastien Faure had strong contacts in Spain, and so his proposal had more impact on Spanish anarchists than the Dielo Truda platform, even though individualist anarchist influence in Spain was weaker than it was in France. The main goal there was reconciling anarcho-communism with anarcho-syndicalism. The Gruppo Comunista Anarchico di Firenze held that during the early twentieth century, the terms "libertarian communism" and "anarchist communism" became synonymous within the international anarchist movement as a result of the close connection they had in Spain (see Anarchism in Spain), with "libertarian communism" becoming the prevalent term. The most extensive application of anarcho-communist ideas (i.e. established around the ideas as they exist today and achieving worldwide attention and knowledge in the historical canon) happened in the anarchist territories during the Spanish Revolution. In Spain, the national anarcho-syndicalist trade union Confederación Nacional del Trabajo initially refused to join a popular front electoral alliance, and abstention by CNT supporters led to a right-wing election victory. In 1936, the CNT changed its policy and anarchist votes helped bring the popular front back to power. Months later, the former ruling class responded with an attempted coup, causing the Spanish Civil War (1936–1939). In response to the army rebellion, an anarchist-inspired movement of peasants and workers, supported by armed militias, took control of Barcelona and of large areas of rural Spain, where they collectivised the land, but even before the fascist victory in 1939 the anarchists were losing ground in a bitter struggle with the Stalinists, who controlled the distribution of military aid to the Republican cause from the Soviet Union. The events known as the Spanish Revolution constituted a workers' social revolution that began during the outbreak of the Spanish Civil War in 1936 and resulted in the widespread implementation of anarchist and, more broadly, libertarian socialist organizational principles throughout various portions of the country for two to three years, primarily in Catalonia, Aragon, Andalusia, and parts of the Levante. 
Much of Spain's economy was put under worker control; in anarchist strongholds like Catalonia, the figure was as high as 75%, but lower in areas with heavy Communist Party of Spain influence, as the Soviet-allied party actively resisted attempts at collectivization. Factories were run through worker committees, and agrarian areas became collectivised and run as libertarian communes. Anarchist historian Sam Dolgoff estimated that about eight million people participated directly or at least indirectly in the Spanish Revolution, which he claimed "came closer to realizing the ideal of the free stateless society on a vast scale than any other revolution in history". Stalinist-led troops suppressed the collectives and persecuted both dissident Marxists and anarchists. Although every sector of the stateless parts of Spain underwent workers' self-management and the collectivisation of agricultural and industrial production—with some areas still using money or some degree of private property alongside heavy regulation of markets by democratic communities—there were other areas throughout Spain that used no money at all and followed principles in accordance with "From each according to his ability, to each according to his needs". One such example was the libertarian communist village of Alcora in the Valencian Community, where money was entirely absent and the distribution of property and services was based upon need, not upon who could afford them. There was no distinction between rich and poor, and everyone held everything in common. Buildings that used to function as shops were turned into storehouses; since buying and selling did not exist in Alcora during the war, they served as centres of distribution from which everyone took freely without paying. Labour was conducted only for enjoyment, with levels of productivity, quality of life, and general prosperity having risen dramatically after the fall of markets. Common ownership of property allowed each inhabitant of the village to fulfil their needs without lowering themselves for the sake of profit, and each individual living in Alcora found themselves ungoverned—anarchists free of rulers and private property. Anarcho-communism entered into internal debates once again over the issue of organization in the post-World War II era. Founded in October 1935, the Anarcho-Communist Federation of Argentina (FACA, Federación Anarco-Comunista Argentina) renamed itself the Argentine Libertarian Federation in 1955. The Fédération Anarchiste (FA) was founded in Paris on December 2, 1945, and elected the platformist anarcho-communist Georges Fontenis as its first secretary the next year. It was composed of a majority of activists from the former FA (which supported Voline's Synthesis) and some members of the former Union Anarchiste, which had supported the CNT-FAI's backing of the Republican government during the Spanish Civil War, as well as some young members of the Resistance. In 1950 a clandestine group called the Organisation Pensée Bataille (OPB), led by Georges Fontenis, formed within the FA. The "Manifesto of Libertarian Communism" was written in 1953 by Georges Fontenis for the "Federation Communiste Libertaire" of France. It is one of the key texts of the anarchist-communist current known as platformism. The OPB pushed for a move that saw the FA change its name to the Fédération Communiste Libertaire (FCL) after the 1953 Congress in Paris, while an article in "Le Libertaire" indicated the end of cooperation with the French Surrealist Group led by André Breton. 
The new decision-making process was founded on unanimity: each person had a right of veto over the orientations of the federation. The FCL published the same year the "Manifeste du communisme libertaire". Several groups quit the FCL in December 1955, disagreeing with the decision to present "revolutionary candidates" to the legislative elections. On August 15–20, 1954, the fifth intercontinental plenum of the CNT took place. A group called the Entente anarchiste appeared, formed of militants who disliked the new ideological orientation that the OPB was giving the FCL, seeing it as authoritarian and almost Marxist. The FCL lasted until 1956, shortly after it participated in state legislative elections with 10 candidates; this move alienated some members of the FCL and thus brought about the end of the organization. A group of militants who disagreed with the FA turning into the FCL reorganized a new Fédération Anarchiste, established in December 1953. This included those who had formed "L'Entente anarchiste", who joined the new FA and then dissolved L'Entente. The new base principles of the FA were written by the individualist anarchist Charles-Auguste Bontemps and the non-platformist anarcho-communist Maurice Joyeux, establishing an organization with a plurality of tendencies and autonomy of groups, organized around synthesist principles. According to historian Cédric Guérin, "the unconditional rejection of Marxism became from that moment onwards an identity element of the new Fédération Anarchiste", motivated in large part by the previous conflict with Georges Fontenis and his OPB. In Italy, the Italian Anarchist Federation was founded in 1945 in Carrara. It adopted an "Associative Pact" and the "Anarchist Program" of Errico Malatesta. It decided to publish the weekly "Umanità Nova", reviving the name of the journal previously published by Errico Malatesta. Inside the FAI, the Anarchist Groups of Proletarian Action (GAAP) was founded, led by Pier Carlo Masini, which "proposed a Libertarian Party with an anarchist theory and practice adapted to the new economic, political and social reality of post-war Italy, with an internationalist outlook and effective presence in the workplaces [...] The GAAP allied themselves with the similar development within the French Anarchist movement" as led by Georges Fontenis. Another tendency, which identified with neither the more classical FAI nor the GAAP, started to emerge as local groups. These groups emphasized direct action, informal affinity groups and expropriation for financing anarchist activity. From within these groups the influential insurrectionary anarchist Alfredo Maria Bonanno would emerge, influenced by the practice of the exiled Spanish anarchist José Lluis Facerías. In the early seventies a platformist tendency emerged within the Italian Anarchist Federation which argued for more strategic coherence and social insertion in the workers' movement while rejecting the synthesist "Associative Pact" of Malatesta to which the FAI adhered. These groups started organizing themselves outside the FAI in organizations such as O.R.A. from Liguria, which organized a congress attended by 250 delegates from groups in 60 locations. This movement was influential in the "autonomia" movements of the seventies. They published "Fronte Libertario della lotta di classe" in Bologna and "Comunismo libertario" in Modena. 
The Federation of Anarchist Communists (Federazione dei Comunisti Anarchici), or FdCA, was established in 1985 in Italy from the fusion of the "Organizzazione Rivoluzionaria Anarchica" ("Revolutionary Anarchist Organisation") and the "Unione dei Comunisti Anarchici della Toscana" ("Tuscan Union of Anarchist Communists"). The International of Anarchist Federations (IAF/IFA) was founded during an international anarchist conference in Carrara in 1968 by the three existing European anarchist federations of France (Fédération Anarchiste), Italy (Federazione Anarchica Italiana) and Spain (Federación Anarquista Ibérica) as well as the Bulgarian federation in French exile. These organizations were also inspired by synthesist principles. "Libertarian Communism" was a socialist journal founded in 1974 and produced in part by members of the Socialist Party of Great Britain. The synthesist Italian Anarchist Federation and the platformist Federation of Anarchist Communists continue to exist in Italy today, but insurrectionary anarchism remains relevant, as the recent establishment of the Informal Anarchist Federation shows. In the 1970s, the French Fédération Anarchiste evolved into a combination of the principles of both synthesis anarchism and platformism, but the platformist organizations Libertarian Communist Organization (France) in 1976 and Alternative libertaire in 1991 later appeared, the latter of which still exists today alongside the synthesist Fédération Anarchiste. In recent times platformist organisations founded the now-defunct International Libertarian Solidarity network and its successor, the Anarkismo network, which is run collaboratively by roughly 30 platformist organisations around the world. On the other hand, contemporary insurrectionary anarchism inherits the views and tactics of anti-organizational anarcho-communism and illegalism. The Informal Anarchist Federation (not to be confused with the synthesist Italian Anarchist Federation, which is also abbreviated FAI) is an Italian insurrectionary anarchist organization. It has been described by Italian intelligence sources as a "horizontal" structure of various anarchist terrorist groups, united in their beliefs in revolutionary armed action. In 2003, the group claimed responsibility for a bomb campaign targeting several European Union institutions. Currently, alongside the previously mentioned federations, the International of Anarchist Federations includes the Argentine Libertarian Federation, the Anarchist Federation of Belarus, the Federation of Anarchists in Bulgaria, the Czech-Slovak Anarchist Federation, the Federation of German-speaking Anarchists in Germany and Switzerland, and the Anarchist Federation in the United Kingdom and Ireland. The abolition of money, prices, and wage labor is central to anarchist communism. With the distribution of wealth based on self-determined needs, people would be free to engage in whatever activities they found most fulfilling and would no longer have to engage in work for which they have neither the temperament nor the aptitude. Anarcho-communists argue that there is no valid way of measuring the value of any one person's economic contributions because all wealth is a common product of current and preceding generations. For instance, one could not measure the value of a factory worker's daily production without taking into account how transportation, food, water, shelter, relaxation, machine efficiency, emotional mood etc. contributed to their production. 
To truly give numerical economic value to anything, an overwhelming number of externalities and contributing factors would need to be taken into account – especially current or past labor contributing to the ability to utilize future labor. As Kropotkin put it: "No distinction can be drawn between the work of each man. Measuring the work by its results leads us to absurdity; dividing and measuring them by hours spent on the work also leads us to absurdity. One thing remains: put the needs above the works, and first of all recognize the right to live, and later on, to the comforts of life, for all those who take their share in production." Communist anarchism shares many traits with collectivist anarchism, but the two are distinct. Collectivist anarchism believes in collective ownership, while communist anarchism negates the entire concept of ownership in favor of the concept of usage. Crucially, the abstract relationship of "landlord" and "tenant" would no longer exist, as such titles are held to occur under conditional legal coercion and are not absolutely necessary to occupy buildings or spaces (intellectual property rights would also cease, since they are a form of private property). In addition to believing rent and other fees are exploitative, anarcho-communists feel these are arbitrary pressures inducing people to carry out unrelated functions. For example, they question why one should have to work for 'X hours' a day to merely live somewhere. So instead of working conditionally for the sake of the wage earned, they believe in working directly for the objective at hand. Anarchist communists reject the claim that wage labor is necessary because people are lazy and selfish by "human nature". They often point out that even the so-called "idle rich" sometimes find useful things to do despite having all their needs satisfied by the labour of others. Anarcho-communists generally do not agree with the belief in a pre-set "human nature", arguing that human culture and behavior are very largely determined by socialization and the mode of production. Many anarchist communists, like Peter Kropotkin, also believe that the human evolutionary tendency is for humans to cooperate with each other for mutual benefit and survival instead of existing as lone competitors. While anarchist communists such as Peter Kropotkin and Murray Bookchin believed that the members of such a society would voluntarily perform all necessary labour because they would recognize the benefits of communal enterprise and mutual aid, others such as Nestor Makhno and Ricardo Flores Magón argued that all those able to work in an anarchist communist society should be obligated to do so, with the exception of groups such as children, the elderly, the sick and the infirm.
https://en.wikipedia.org/wiki?curid=17865
London London is the capital and largest city of England and the United Kingdom. Standing on the River Thames in the south-east of England, at the head of its estuary leading to the North Sea, London has been a major settlement for two millennia. "Londinium" was founded by the Romans. The City of London, London's ancient core − an area of just and colloquially known as the Square Mile − retains boundaries that closely follow its medieval limits. The City of Westminster is also an Inner London borough holding city status. London is governed by the Mayor of London and the London Assembly. London is considered to be one of the world's most important global cities and has been called the world's most powerful, most desirable, most influential, most visited, most expensive, sustainable, most investment-friendly, and most-popular-for-work city. It exerts a considerable impact upon the arts, commerce, education, entertainment, fashion, finance, healthcare, media, professional services, research and development, tourism and transportation. London ranks 26th out of 300 major cities for economic performance. It is one of the largest financial centres and has either the fifth- or the sixth-largest metropolitan area GDP. It is the most-visited city as measured by international arrivals and has the busiest city airport system as measured by passenger traffic. It is the leading investment destination, hosting more international retailers and ultra high-net-worth individuals than any other city. London's universities form the largest concentration of higher education institutes in Europe, and London is home to highly ranked institutions such as Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London and King's College London. In 2012, London became the first city to have hosted three modern Summer Olympic Games. London has a diverse range of people and cultures, and more than 300 languages are spoken in the region. Its estimated mid-2018 municipal population (corresponding to Greater London) was 8,908,081, the third most populous of any city in Europe and accounts for 13.4% of the UK population. London's urban area is the third most populous in Europe, after Moscow and Paris, with 9,787,426 inhabitants at the 2011 census. The London commuter belt is the second-most populous in Europe, after the Moscow Metropolitan Area, with 14,040,163 inhabitants in 2016. London contains four World Heritage Sites: the Tower of London; Kew Gardens; the site comprising the Palace of Westminster, Westminster Abbey, and St Margaret's Church; and the historic settlement in Greenwich where the Royal Observatory, Greenwich defines the Prime Meridian (0° longitude) and Greenwich Mean Time. Other landmarks include Buckingham Palace, the London Eye, Piccadilly Circus, St Paul's Cathedral, Tower Bridge, Trafalgar Square and The Shard. London has numerous museums, galleries, libraries and sporting events. These include the British Museum, National Gallery, Natural History Museum, Tate Modern, British Library and West End theatres. The London Underground is the oldest underground railway network in the world. "London" is an ancient name, already attested in the first century AD, usually in the Latinised form "Londinium"; for example, handwritten Roman tablets recovered in the city originating from AD 65/70-80 include the word "Londinio" ("in London"). Over the years, the name has attracted many mythicising explanations. 
The earliest attested explanation appears in Geoffrey of Monmouth's "Historia Regum Britanniae", written around 1136. This had it that the name originated from a supposed King Lud, who had allegedly taken over the city and named it "Kaerlud". Modern scientific analyses of the name must account for the origins of the different forms found in early sources: Latin (usually "Londinium"), Old English (usually "Lunden"), and Welsh (usually "Llundein"), with reference to the known developments over time of sounds in those different languages. It is agreed that the name came into these languages from Common Brythonic; recent work tends to reconstruct the lost Celtic form of the name as *[Londonjon] or something similar. This was adapted into Latin as "Londinium" and borrowed into Old English, the ancestor-language of English. The toponymy of the Common Brythonic form is much debated. A prominent explanation was Richard Coates' 1998 argument that the name derived from pre-Celtic Old European *"(p)lowonida", meaning "river too wide to ford". Coates suggested that this was a name given to the part of the River Thames which flows through London; from this, the settlement gained the Celtic form of its name, *"Lowonidonjon". However, most work has accepted a Celtic origin for the name, and recent studies have favoured an explanation along the lines of a Celtic derivative of a Proto-Indo-European root *"lendh-" ('sink, cause to sink'), combined with the Celtic suffix *-"injo"- or *-"onjo"- (used to form place-names). Peter Schrijver has specifically suggested, on these grounds, that the name originally meant 'place that floods (periodically, tidally)'. Until 1889, the name "London" applied officially to the City of London, but since then it has also referred to the County of London and Greater London. In writing, "London" is occasionally contracted colloquially to "LDN". Such usage originated in SMS language and is often found on social media profiles, suffixing an alias or handle. In 1993, the remains of a Bronze Age bridge were found on the south foreshore, upstream of Vauxhall Bridge. This bridge either crossed the Thames or reached a now lost island in it. Two of its timbers were radiocarbon dated to between 1750 BC and 1285 BC. In 2010, the foundations of a large timber structure, dated to between 4800 BC and 4500 BC, were found on the Thames's south foreshore, downstream of Vauxhall Bridge. The function of the mesolithic structure is not known. Both structures are on the south bank where the River Effra flows into the Thames. Although there is evidence of scattered Brythonic settlements in the area, the first major settlement was founded by the Romans about four years after the invasion of AD 43. This lasted only until around AD 61, when the Iceni tribe led by Queen Boudica stormed it, burning the settlement to the ground. The next, heavily planned, incarnation of Londinium prospered, and it superseded Colchester as the capital of the Roman province of Britannia in 100. At its height in the 2nd century, Roman London had a population of around 60,000. With the collapse of Roman rule in the early 5th century, London ceased to be a capital, and the walled city of Londinium was effectively abandoned, although Roman civilisation continued in the area of St Martin-in-the-Fields until around 450. From around 500, an Anglo-Saxon settlement known as Lundenwic developed slightly west of the old Roman city. 
By about 680, the city had regrown into a major port, although there is little evidence of large-scale production. From the 820s repeated Viking assaults brought decline. Three are recorded; those in 851 and 886 succeeded, while the last, in 994, was rebuffed. The Vikings established Danelaw over much of eastern and northern England; its boundary stretched roughly from London to Chester. It was an area of political and geographical control imposed by the Viking incursions which was formally agreed by the Danish warlord, Guthrum and the West Saxon king Alfred the Great in 886. The "Anglo-Saxon Chronicle" recorded that Alfred "refounded" London in 886. Archaeological research shows that this involved abandonment of Lundenwic and a revival of life and trade within the old Roman walls. London then grew slowly until about 950, after which activity increased dramatically. By the 11th century, London was beyond all comparison the largest town in England. Westminster Abbey, rebuilt in the Romanesque style by King Edward the Confessor, was one of the grandest churches in Europe. Winchester had previously been the capital of Anglo-Saxon England, but from this time on, London became the main forum for foreign traders and the base for defence in time of war. In the view of Frank Stenton: "It had the resources, and it was rapidly developing the dignity and the political self-consciousness appropriate to a national capital." After winning the Battle of Hastings, William, Duke of Normandy was crowned King of England in the newly completed Westminster Abbey on Christmas Day 1066. William constructed the Tower of London, the first of the many Norman castles in England to be rebuilt in stone, in the southeastern corner of the city, to intimidate the native inhabitants. In 1097, William II began the building of Westminster Hall, close by the abbey of the same name. The hall became the basis of a new Palace of Westminster. In the 12th century, the institutions of central government, which had hitherto accompanied the royal English court as it moved around the country, grew in size and sophistication and became increasingly fixed in one place. For most purposes this was Westminster, although the royal treasury, having been moved from Winchester, came to rest in the Tower. While the City of Westminster developed into a true capital in governmental terms, its distinct neighbour, the City of London, remained England's largest city and principal commercial centre, and it flourished under its own unique administration, the Corporation of London. In 1100, its population was around 18,000; by 1300 it had grown to nearly 100,000. Disaster struck in the form of the Black Death in the mid-14th century, when London lost nearly a third of its population. London was the focus of the Peasants' Revolt in 1381. London was also a centre of England's Jewish population before their expulsion by Edward I in 1290. Violence against Jews took place in 1190, after it was rumoured that the new king had ordered their massacre after they had presented themselves at his coronation. In 1264 during the Second Barons' War, Simon de Montfort's rebels killed 500 Jews while attempting to seize records of debts. During the Tudor period the Reformation produced a gradual shift to Protestantism, and much of London property passed from church to private ownership, which accelerated trade and business in the city. In 1475, the Hanseatic League set up its main trading base ("kontor") of England in London, called the "Stalhof" or "Steelyard". 
It existed until 1853, when the Hanseatic cities of Lübeck, Bremen and Hamburg sold the property to South Eastern Railway. Woollen cloth was shipped undyed and undressed from 14th/15th century London to the nearby shores of the Low Countries, where it was considered indispensable. But the reach of English maritime enterprise hardly extended beyond the seas of north-west Europe. The commercial route to Italy and the Mediterranean Sea normally lay through Antwerp and over the Alps; any ships passing through the Strait of Gibraltar to or from England were likely to be Italian or Ragusan. Upon the re-opening of the Netherlands to English shipping in January 1565, there ensued a strong outburst of commercial activity. The Royal Exchange was founded. Mercantilism grew, and monopoly trading companies such as the East India Company were established, with trade expanding to the New World. London became the principal North Sea port, with migrants arriving from England and abroad. The population rose from an estimated 50,000 in 1530 to about 225,000 in 1605. In the 16th century William Shakespeare and his contemporaries lived in London at a time of hostility to the development of the theatre. By the end of the Tudor period in 1603, London was still very compact. There was an assassination attempt on James I in Westminster, in the Gunpowder Plot on 5 November 1605. In 1637, the government of Charles I attempted to reform administration in the area of London. The plan called for the Corporation of the City to extend its jurisdiction and administration over expanding areas around the City. Fearing an attempt by the Crown to diminish the Liberties of London, a lack of interest in administering these additional areas, or concern by city guilds of having to share power, the Corporation refused. Later called "The Great Refusal", this decision largely continues to account for the unique governmental status of the City. In the English Civil War the majority of Londoners supported the Parliamentary cause. After an initial advance by the Royalists in 1642, culminating in the battles of Brentford and Turnham Green, London was surrounded by a defensive perimeter wall known as the Lines of Communication. The lines were built by up to 20,000 people, and were completed in under two months. The fortifications failed their only test when the New Model Army entered London in 1647, and they were levelled by Parliament the same year. London was plagued by disease in the early 17th century, culminating in the Great Plague of 1665–1666, which killed up to 100,000 people, or a fifth of the population. The Great Fire of London broke out in 1666 in Pudding Lane in the city and quickly swept through the wooden buildings. Rebuilding took over ten years and was supervised by Robert Hooke as Surveyor of London. In 1708 Christopher Wren's masterpiece, St Paul's Cathedral was completed. During the Georgian era, new districts such as Mayfair were formed in the west; new bridges over the Thames encouraged development in South London. In the east, the Port of London expanded downstream. London's development as an international financial centre matured for much of the 1700s. In 1762, George III acquired Buckingham House and it was enlarged over the next 75 years. During the 18th century, London was dogged by crime, and the Bow Street Runners were established in 1750 as a professional police force. In total, more than 200 offences were punishable by death, including petty theft. 
Most children born in the city died before reaching their third birthday. The coffeehouse became a popular place to debate ideas, with growing literacy and the development of the printing press making news widely available, and Fleet Street became the centre of the British press. Following the invasion of Amsterdam by Napoleonic armies, many financiers relocated to London, especially a large Jewish community, and the first London international issue was arranged in 1817. Around the same time, the Royal Navy became the world's leading war fleet, acting as a serious deterrent to potential economic adversaries of the United Kingdom. The repeal of the Corn Laws in 1846 was specifically aimed at weakening Dutch economic power. London then overtook Amsterdam as the leading international financial centre. In 1888, London became home to a series of murders by a man known only as Jack the Ripper; the case has since become one of the world's most famous unsolved mysteries. According to Samuel Johnson: "You find no man, at all intellectual, who is willing to leave London. No, Sir, when a man is tired of London, he is tired of life; for there is in London all that life can afford." London was the world's largest city from 1831 to 1925, with a population density of 325 people per hectare. London's overcrowded conditions led to cholera epidemics, claiming 14,000 lives in 1848, and 6,000 in 1866. Rising traffic congestion led to the creation of the world's first local urban rail network. The Metropolitan Board of Works oversaw infrastructure expansion in the capital and some of the surrounding counties; it was abolished in 1889 when the London County Council was created out of those areas of the counties surrounding the capital. London was bombed by the Germans during the First World War, and during the Second World War, the Blitz and other bombings by the German "Luftwaffe" killed over 30,000 Londoners, destroying large tracts of housing and other buildings across the city. Immediately after the War, the 1948 Summer Olympics were held at the original Wembley Stadium, at a time when London was still recovering from the war. From the 1940s onwards, London became home to many immigrants, primarily from Commonwealth countries such as Jamaica, India, Bangladesh and Pakistan, making London one of the most diverse cities worldwide. In 1951, the Festival of Britain was held on the South Bank. The Great Smog of 1952 led to the Clean Air Act 1956, which ended the "pea soup fogs" for which London had been notorious. Primarily starting in the mid-1960s, London became a centre for the worldwide youth culture, exemplified by the Swinging London subculture associated with the King's Road, Chelsea and Carnaby Street. The role of trendsetter was revived during the punk era. In 1965 London's political boundaries were expanded to take into account the growth of the urban area, and a new Greater London Council was created. During The Troubles in Northern Ireland, London was subjected to bombing attacks by the Provisional Irish Republican Army for two decades, starting with the Old Bailey bombing in 1973. Racial inequality was highlighted by the 1981 Brixton riot. Greater London's population declined steadily in the decades after the Second World War, from an estimated peak of 8.6 million in 1939 to around 6.8 million in the 1980s. The principal ports for London moved downstream to Felixstowe and Tilbury, with the London Docklands area becoming a focus for regeneration, including the Canary Wharf development. This was born out of London's ever-increasing role as a major international financial centre during the 1980s. 
The Thames Barrier was completed in the 1980s to protect London against tidal surges from the North Sea. The Greater London Council was abolished in 1986, which left London without a central administration until 2000, when London-wide government was restored with the creation of the Greater London Authority. To celebrate the start of the 21st century, the Millennium Dome, London Eye and Millennium Bridge were constructed. On 6 July 2005 London was awarded the 2012 Summer Olympics, making London the first city to stage the Olympic Games three times. On 7 July 2005, three London Underground trains and a double-decker bus were bombed in a series of terrorist attacks. In 2008, "Time" named London alongside New York City and Hong Kong as Nylonkong, hailing them as the world's three most influential global cities. In January 2015, Greater London's population was estimated to be 8.63 million, the highest level since 1939. During the Brexit referendum in 2016, the UK as a whole decided to leave the European Union, but a majority of London constituencies voted to remain in the EU. The administration of London is formed of two tiers: a citywide, strategic tier and a local tier. Citywide administration is coordinated by the Greater London Authority (GLA), while local administration is carried out by 33 smaller authorities. The GLA consists of two elected components: the mayor of London, who has executive powers, and the London Assembly, which scrutinises the mayor's decisions and can accept or reject the mayor's budget proposals each year. The headquarters of the GLA is City Hall, Southwark. The mayor since 2016 has been Sadiq Khan, the first Muslim mayor of a major Western capital. The mayor's statutory planning strategy is published as the London Plan, which was most recently revised in 2011. The local authorities are the councils of the 32 London boroughs and the City of London Corporation. They are responsible for most local services, such as local planning, schools, social services, local roads and refuse collection. Certain functions, such as waste management, are provided through joint arrangements. In 2009–2010 the combined revenue expenditure by London councils and the GLA amounted to just over £22 billion (£14.7 billion for the boroughs and £7.4 billion for the GLA). The London Fire Brigade is the statutory fire and rescue service for Greater London. It is run by the London Fire and Emergency Planning Authority and is the third largest fire service in the world. National Health Service ambulance services are provided by the London Ambulance Service (LAS) NHS Trust, the largest free-at-the-point-of-use emergency ambulance service in the world. The London Air Ambulance charity operates in conjunction with the LAS where required. Her Majesty's Coastguard and the Royal National Lifeboat Institution operate on the River Thames, which is under the jurisdiction of the Port of London Authority from Teddington Lock to the sea. London is the seat of the Government of the United Kingdom. Many government departments, as well as the prime minister's residence at 10 Downing Street, are based close to the Palace of Westminster, particularly along Whitehall. There are 73 members of Parliament (MPs) from London, elected from local parliamentary constituencies in the national Parliament. Of these, 49 are from the Labour Party, 21 are Conservatives, and three are Liberal Democrats. The ministerial post of minister for London was created in 1994. The current Minister for London is Paul Scully MP. 
Policing in Greater London, with the exception of the City of London, is provided by the Metropolitan Police, overseen by the mayor through the Mayor's Office for Policing and Crime (MOPAC). The City of London has its own police force – the City of London Police. The British Transport Police are responsible for police services on National Rail, London Underground, Docklands Light Railway and Tramlink services. A fourth police force in London, the Ministry of Defence Police, does not generally become involved with policing the general public. Crime rates vary widely by area, ranging from parts with serious issues to parts considered very safe. Today crime figures are made available nationally at Local Authority and Ward level. In 2015, there were 118 homicides, a 25.5% increase over 2014. The Metropolitan Police have made detailed crime figures, broken down by category at borough and ward level, available on their website since 2000. Recorded crime has been rising in London; notably, violent crime and murders by stabbing and other means have risen. There were 50 murders from the start of 2018 to mid-April 2018. Funding cuts to police in London are likely to have contributed to this, though other factors are also involved. London, also referred to as Greater London, is one of nine regions of England and the top-level subdivision covering most of the city's metropolis. The small ancient City of London at its core once comprised the whole settlement, but as its urban area grew, the Corporation of London resisted attempts to amalgamate the city with its suburbs, causing "London" to be defined in a number of ways for different purposes. Forty per cent of Greater London is covered by the London post town, within which 'LONDON' forms part of postal addresses. The London telephone area code (020) covers a larger area, similar in size to Greater London, although some outer districts are excluded and some places just outside are included. The Greater London boundary has been aligned to the M25 motorway in places. Outward urban expansion is now prevented by the Metropolitan Green Belt, although the built-up area extends beyond the boundary in places, resulting in a separately defined Greater London Urban Area. Beyond this is the vast London commuter belt. Greater London is split for some purposes into Inner London and Outer London. The city is split by the River Thames into North and South, with an informal central London area in its interior. The coordinates of the nominal centre of London, traditionally considered to be the original Eleanor Cross at Charing Cross near the junction of Trafalgar Square and Whitehall, are about . However, the geographical centre of London, on one definition, is in the London Borough of Lambeth, just 0.1 miles to the northeast of Lambeth North tube station. Within London, both the City of London and the City of Westminster have city status, and both the City of London and the remainder of Greater London are counties for the purposes of lieutenancies. The area of Greater London includes areas that are part of the historic counties of Middlesex, Kent, Surrey, Essex and Hertfordshire. London's status as the capital of England, and later the United Kingdom, has never been granted or confirmed officially—by statute or in written form. Its position was formed through constitutional convention, making its status as "de facto" capital a part of the UK's uncodified constitution. 
The capital of England was moved to London from Winchester as the Palace of Westminster developed in the 12th and 13th centuries to become the permanent location of the royal court, and thus the political capital of the nation. More recently, Greater London has been defined as a region of England and in this context is known as "London". Greater London encompasses a total area of , an area which had a population of 7,172,036 in 2001 and a population density of . The extended area known as the London Metropolitan Region or the London Metropolitan Agglomeration comprises a total area of , has a population of 13,709,000, and has a population density of . Modern London stands on the Thames, its primary geographical feature, a navigable river which crosses the city from the south-west to the east. The Thames Valley is a floodplain surrounded by gently rolling hills including Parliament Hill, Addington Hills, and Primrose Hill. Historically London grew up at the lowest bridging point on the Thames. The Thames was once a much broader, shallower river with extensive marshlands; at high tide, its shores reached five times their present width. Since the Victorian era the Thames has been extensively embanked, and many of its London tributaries now flow underground. The Thames is a tidal river, and London is vulnerable to flooding. The threat has increased over time because of a slow but continuous rise in high water level caused by the slow 'tilting' of the British Isles (up in Scotland and Northern Ireland and down in southern parts of England, Wales and Ireland) resulting from post-glacial rebound. In 1974 a decade of work began on the construction of the Thames Barrier across the Thames at Woolwich to deal with this threat. While the barrier is expected to function as designed until roughly 2070, concepts for its future enlargement or redesign are already being discussed. London has a temperate oceanic climate (Köppen: "Cfb"), receiving less precipitation than Rome, Bordeaux, Lisbon, Naples, Sydney or New York City. Rainfall records have been kept in the city since at least 1697, when records began at Kew. At Kew, the most rainfall in one month is in November 1755 and the least is in both December 1788 and July 1800. Mile End also had in April 1893. The wettest year on record is 1903, with a total fall of , and the driest is 1921, with a total fall of . Temperature extremes in London range from at Kew during August 2003 down to . However, an unofficial reading of was reported on 3 January 1740. Conversely, the highest unofficial temperature ever known to be recorded in the United Kingdom occurred in London in the 1808 heat wave. The temperature was recorded at on 13 July. It is thought that this temperature, if accurate, is one of the highest temperatures of the millennium in the United Kingdom. It is thought that only days in 1513 and 1707 could have beaten this. Since records began in London (first at Greenwich in 1841), the warmest month on record is July 1868, with a mean temperature of at Greenwich, whereas the coldest month is December 2010, with a mean temperature of at Northolt. Records for atmospheric pressure have been kept at London since 1692. The highest pressure ever reported is on 20 January 2020, and the lowest is on 25 December 1821. Summers are generally warm, sometimes hot. London's average July high is 24 °C (74 °F). On average, London experiences 31 days above and 4.2 days above each year. 
During the 2003 European heat wave there were 14 consecutive days above and 2 consecutive days when temperatures reached , leading to hundreds of heat-related deaths. There was also a previous spell of 15 consecutive days above in 1976, which also caused many heat-related deaths. The previous record high was in August 1911 at the Greenwich station. Droughts can also, occasionally, be a problem, especially in summer, most recently in the summer of 2018, when much drier than average conditions prevailed from May to December. However, the longest run of consecutive days without rain was 73, in the spring of 1893. Winters are generally cool with little temperature variation. Heavy snow is rare, but snow usually falls at least once each winter. Spring and autumn can be pleasant. As a large city, London has a considerable urban heat island effect, making the centre of London at times warmer than the suburbs and outskirts. This can be seen when comparing London Heathrow, west of London, with the London Weather Centre. Although London and the British Isles have a reputation for frequent rainfall, London's average of of precipitation annually actually makes it drier than the global average. The absence of heavy winter rainfall means that many climates around the Mediterranean have more annual precipitation than London. London's vast urban area is often described using a set of district names, such as Bloomsbury, Mayfair, Wembley and Whitechapel. These are either informal designations, names of villages that have been absorbed by sprawl, or superseded administrative units such as parishes or former boroughs. Such names have remained in use through tradition, each referring to a local area with its own distinctive character, but without official boundaries. Since 1965 Greater London has been divided into 32 London boroughs in addition to the ancient City of London. The City of London is the main financial district, and Canary Wharf has recently developed into a new financial and commercial hub in the Docklands to the east. The West End is London's main entertainment and shopping district, attracting tourists. West London includes expensive residential areas where properties can sell for tens of millions of pounds. The average price for properties in Kensington and Chelsea is over £2 million, with a similarly high outlay in most of central London. The East End is the area closest to the original Port of London, known for its high immigrant population, as well as for being one of the poorest areas in London. The surrounding East London area saw much of London's early industrial development; now, brownfield sites throughout the area are being redeveloped as part of the Thames Gateway, including the London Riverside and Lower Lea Valley, which was developed into the Olympic Park for the 2012 Olympics and Paralympics. London's buildings are too diverse to be characterised by any particular architectural style, partly because of their varying ages. Many grand houses and public buildings, such as the National Gallery, are constructed from Portland stone. Some areas of the city, particularly those just west of the centre, are characterised by white stucco or whitewashed buildings. Few structures in central London pre-date the Great Fire of 1666, these being a few trace Roman remains, the Tower of London and a few scattered Tudor survivors in the City. Further out is, for example, the Tudor-period Hampton Court Palace, England's oldest surviving Tudor palace, built by Cardinal Thomas Wolsey around 1515. 
The varied architectural heritage ranges from the 17th-century churches by Wren and neoclassical financial institutions such as the Royal Exchange and the Bank of England, to the early 20th-century Old Bailey and the 1960s Barbican Estate. The disused – but soon to be rejuvenated – 1939 Battersea Power Station by the river in the south-west is a local landmark, while some railway termini are excellent examples of Victorian architecture, most notably St. Pancras and Paddington. The density of London varies, with high employment density in the central area and Canary Wharf, high residential densities in inner London, and lower densities in Outer London. The Monument in the City of London provides views of the surrounding area while commemorating the Great Fire of London, which originated nearby. Marble Arch and Wellington Arch, at the north and south ends of Park Lane, respectively, have royal connections, as do the Albert Memorial and Royal Albert Hall in Kensington. Nelson's Column is a nationally recognised monument in Trafalgar Square, one of the focal points of central London. Older buildings are mainly brick built, most commonly the yellow London stock brick or a warm orange-red variety, often decorated with carvings and white plaster mouldings. In the densest areas, most of the concentration is in medium- and high-rise buildings. London's skyscrapers, such as 30 St Mary Axe, Tower 42, the Broadgate Tower and One Canada Square, are mostly in the two financial districts, the City of London and Canary Wharf. High-rise development is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. Nevertheless, there are a number of tall skyscrapers in central London (see Tall buildings in London), including the 95-storey Shard London Bridge, the tallest building in the United Kingdom. Other notable modern buildings include City Hall in Southwark with its distinctive oval shape, the Art Deco BBC Broadcasting House, the Postmodernist British Library in Somers Town/Kings Cross, and No 1 Poultry by James Stirling. What was formerly the Millennium Dome, by the Thames to the east of Canary Wharf, is now an entertainment venue called the O2 Arena. The London Natural History Society suggests that London is "one of the World's Greenest Cities", with more than 40 per cent green space or open water. It indicates that 2000 species of flowering plant have been found growing there and that the tidal Thames supports 120 species of fish. It also states that over 60 species of bird nest in central London and that its members have recorded 47 species of butterfly, 1173 moths and more than 270 kinds of spider around London. London's wetland areas support nationally important populations of many water birds. London has 38 Sites of Special Scientific Interest (SSSIs), two national nature reserves and 76 local nature reserves. Amphibians are common in the capital, including smooth newts living by the Tate Modern, and common frogs, common toads, palmate newts and great crested newts. On the other hand, native reptiles such as slowworms, common lizards, grass snakes and adders are mostly only seen in Outer London. Among other inhabitants of London are 10,000 red foxes, so that there are now 16 foxes for every square mile (2.6 square kilometres) of London. These urban foxes are noticeably bolder than their country cousins, sharing the pavement with pedestrians and raising cubs in people's backyards. 
Foxes have even sneaked into the Houses of Parliament, where one was found asleep on a filing cabinet. Another broke into the grounds of Buckingham Palace, reportedly killing some of Queen Elizabeth II's prized pink flamingos. Generally, however, foxes and city folk appear to get along. A survey in 2001 by the London-based Mammal Society found that 80 per cent of 3,779 respondents who volunteered to keep a diary of garden mammal visits liked having them around. This sample cannot be taken to represent Londoners as a whole. Other mammals found in Greater London are hedgehogs, rats, mice, rabbits, shrews, voles, and squirrels. In wilder areas of Outer London, such as Epping Forest, a wide variety of mammals are found, including hare, badger, field, bank and water vole, wood mouse, yellow-necked mouse, mole, shrew, and weasel, in addition to fox, squirrel and hedgehog. A dead otter was found at The Highway, in Wapping, about a mile from the Tower Bridge, which would suggest that otters have begun to move back after an absence of a hundred years from the city. Ten of England's eighteen species of bats have been recorded in Epping Forest: soprano, Nathusius' and common pipistrelles, noctule, serotine, barbastelle, Daubenton's, brown long-eared, Natterer's and Leisler's. Among the strange sights seen in London has been a whale in the Thames, while the BBC Two programme "Natural World: Unnatural History of London" shows pigeons using the London Underground to get around the city, a seal that takes fish from fishmongers outside Billingsgate Fish Market, and foxes that will "sit" if given sausages. Herds of red and fallow deer also roam freely within much of Richmond and Bushy Park. A cull takes place each November and February to ensure numbers can be sustained. Epping Forest is also known for its fallow deer, which can frequently be seen in herds to the north of the Forest. A rare population of melanistic, black fallow deer is also maintained at the Deer Sanctuary near Theydon Bois. Muntjac deer, which escaped from deer parks at the turn of the twentieth century, are also found in the forest. While Londoners are accustomed to wildlife such as birds and foxes sharing the city, more recently urban deer have started becoming a regular feature, and whole herds of fallow deer come into residential areas at night to take advantage of London's green spaces. The 2011 census recorded that 2,998,264 people, or 36.7% of London's population, were foreign-born, making London the city with the second largest immigrant population, behind New York City, in terms of absolute numbers. About 69% of children born in London in 2015 had at least one parent who was born abroad. With increasing industrialisation, London's population grew rapidly throughout the 19th and early 20th centuries, and it was for some time in the late 19th and early 20th centuries the most populous city in the world. Its population peaked at 8,615,245 in 1939 immediately before the outbreak of the Second World War, but had declined to 7,192,091 at the 2001 Census. However, the population then grew by just over a million between the 2001 and 2011 Censuses, to reach 8,173,941 in the latter enumeration. 
However, London's continuous urban area extends beyond the borders of Greater London and was home to 9,787,426 people in 2011, while its wider metropolitan area has a population of between 12 and 14 million depending on the definition used. According to Eurostat, London is the most populous city and metropolitan area of the European Union and the second most populous in Europe. During the period 1991–2001 a net 726,000 immigrants arrived in London. The region covers an area of . The population density is , more than ten times that of any other . In terms of population, London is the 19th largest city and the 18th largest metropolitan region. Children (aged younger than 14 years) constitute 21 per cent of the population in Outer London and 28 per cent in Inner London; the age group aged between 15 and 24 years is 12 per cent in both Outer and Inner London; those aged between 25 and 44 years are 31 per cent in Outer London and 40 per cent in Inner London; those aged between 45 and 64 years form 26 per cent and 21 per cent in Outer and Inner London respectively; while in Outer London those aged 65 and older are 13 per cent, though in Inner London just 9 per cent. The median age of London's population in 2017 was 36.5 years. According to the Office for National Statistics, based on the 2011 Census estimates, 59.8 per cent of the 8,173,941 inhabitants of London were White, with 44.9 per cent White British, 2.2 per cent White Irish, 0.1 per cent Gypsy/Irish Traveller and 12.1 per cent classified as Other White. 20.9 per cent of Londoners are of Asian and mixed-Asian descent. 19.7 per cent are of full Asian descent, with those of mixed-Asian heritage comprising 1.2 per cent of the population. Indians account for 6.6 per cent of the population, followed by Pakistanis and Bangladeshis at 2.7 per cent each. Chinese peoples account for 1.5 per cent of the population, with Arabs comprising 1.3 per cent. A further 4.9 per cent are classified as "Other Asian". 15.6 per cent of London's population are of Black and mixed-Black descent. 13.3 per cent are of full Black descent, with those of mixed-Black heritage comprising 2.3 per cent. Black Africans account for 7.0 per cent of London's population, with 4.2 per cent as Black Caribbean and 2.1 per cent as "Other Black". 5.0 per cent are of mixed race. Across London, Black and Asian children outnumber White British children by about six to four in state schools. Altogether at the 2011 census, of London's 1,624,768 population aged 0 to 15, 46.4 per cent were White, 19.8 per cent were Asian, 19 per cent were Black, 10.8 per cent were Mixed and 4 per cent represented another ethnic group. In January 2005, a survey of London's ethnic and religious diversity claimed that there were more than 300 languages spoken in London and more than 50 non-indigenous communities with a population of more than 10,000. Figures from the Office for National Statistics show that London's foreign-born population was 2,650,000 (33 per cent), up from 1,630,000 in 1997. The 2011 census showed that 36.7 per cent of Greater London's population were born outside the UK. A portion of the German-born population are likely to be British nationals born to parents serving in the British Armed Forces in Germany. Estimates produced by the Office for National Statistics indicate that the five largest foreign-born groups living in London in the period July 2009 to June 2010 were those born in India, Poland, the Republic of Ireland, Bangladesh and Nigeria. 
According to the 2011 Census, the largest religious groupings are Christians (48.4 per cent), followed by those of no religion (20.7 per cent), Muslims (12.4 per cent), no response (8.5 per cent), Hindus (5.0 per cent), Jews (1.8 per cent), Sikhs (1.5 per cent), Buddhists (1.0 per cent) and other (0.6 per cent). London has traditionally been Christian, and has a large number of churches, particularly in the City of London. The well-known St Paul's Cathedral in the City and Southwark Cathedral south of the river are Anglican administrative centres, while the Archbishop of Canterbury, principal bishop of the Church of England and worldwide Anglican Communion, has his main residence at Lambeth Palace in the London Borough of Lambeth. Important national and royal ceremonies are shared between St Paul's and Westminster Abbey. The Abbey is not to be confused with nearby Westminster Cathedral, which is the largest Roman Catholic cathedral in England and Wales. Despite the prevalence of Anglican churches, observance is very low within the Anglican denomination. Church attendance continues on a long, slow, steady decline, according to Church of England statistics. London is also home to sizeable Muslim, Hindu, Sikh, and Jewish communities. Notable mosques include the East London Mosque in Tower Hamlets, which is allowed to give the Islamic call to prayer through loudspeakers, the London Central Mosque on the edge of Regent's Park and the Baitul Futuh of the Ahmadiyya Muslim Community. Following the oil boom, increasing numbers of wealthy Middle-Eastern Arab Muslims have based themselves around Mayfair, Kensington, and Knightsbridge in West London. There are large Bengali Muslim communities in the eastern boroughs of Tower Hamlets and Newham. Large Hindu communities are in the north-western boroughs of Harrow and Brent, the latter of which hosts what was, until 2006, Europe's largest Hindu temple, Neasden Temple. London is also home to 44 Hindu temples, including the BAPS Shri Swaminarayan Mandir London. There are Sikh communities in East and West London, particularly in Southall, home to one of the largest Sikh populations and the largest Sikh temple outside India. The majority of British Jews live in London, with significant Jewish communities in Stamford Hill, Stanmore, Golders Green, Finchley, Hampstead, Hendon and Edgware in North London. Bevis Marks Synagogue in the City of London is affiliated to London's historic Sephardic Jewish community. It is the only synagogue in Europe which has held regular services continuously for over 300 years. Stanmore and Canons Park Synagogue has the largest membership of any single Orthodox synagogue in the whole of Europe, overtaking Ilford synagogue (also in London) in 1998. The community set up the London Jewish Forum in 2006 in response to the growing significance of devolved London government. The accent of a 21st-century Londoner varies widely; increasingly common amongst the under-30s is a fusion of Cockney with a whole array of ethnic accents, in particular Caribbean, which together form an accent labelled Multicultural London English (MLE). The other widely heard and spoken accent is RP (Received Pronunciation) in various forms, which can often be heard in the media and many other traditional professions, although this accent is not limited to London and South East England, and can also be heard selectively throughout the whole UK amongst certain social groupings. 
Since the turn of the century, the Cockney dialect has become less common in the East End and has 'migrated' east to Havering and the county of Essex. London's gross regional product in 2018 was almost £500 billion, around a quarter of UK GDP. London has five major business districts: the City, Westminster, Canary Wharf, Camden & Islington and Lambeth & Southwark. One way to get an idea of their relative importance is to look at relative amounts of office space: Greater London had 27 million m2 of office space in 2001, and the City contains the most space, with 8 million m2 of office space. London has some of the highest real estate prices in the world. According to a 2015 World Property Journal report, London had been the world's most expensive office market for the previous three years. Residential property in London is worth $2.2 trillion, roughly the same as Brazil's annual GDP. The city has the highest property prices of any European city according to the Office for National Statistics and the European Office of Statistics. On average the price per square metre in central London is €24,252 (April 2014). This is higher than the property prices in other G8 European capital cities: Berlin €3,306, Rome €6,188 and Paris €11,229. London's finance industry is based in the City of London and Canary Wharf, the two major business districts in London. London is one of the pre-eminent financial centres of the world and the most important location for international finance. London took over as a major financial centre shortly after 1795, when the Dutch Republic collapsed before the Napoleonic armies. For many bankers established in Amsterdam (e.g. Hope, Baring), this was the time to move to London. The London financial elite was strengthened by a strong Jewish community from all over Europe capable of mastering the most sophisticated financial tools of the time. This unique concentration of talents accelerated the transition from the Commercial Revolution to the Industrial Revolution. By the end of the 19th century, Britain was the wealthiest of all nations, and London a leading financial centre. Still, London tops the world rankings on the Global Financial Centres Index (GFCI), and it ranked second in A.T. Kearney's 2018 Global Cities Index. London's largest industry is finance, and its financial exports make it a large contributor to the UK's balance of payments. Around 325,000 people were employed in financial services in London until mid-2007. London has over 480 overseas banks, more than any other city in the world. It is also the world's biggest currency trading centre, accounting for some 37 per cent of the $5.1 trillion average daily volume, according to the BIS. Over 85 per cent (3.2 million) of the employed population of Greater London works in the services industries. Because of its prominent global role, London's economy was affected by the financial crisis of 2007–2008. However, by 2010 the City had recovered: it put in place new regulatory powers, regained lost ground and re-established London's economic dominance. Along with professional services headquarters, the City of London is home to the Bank of England, London Stock Exchange, and Lloyd's of London insurance market. Over half of the UK's top 100 listed companies (the FTSE 100) and over 100 of Europe's 500 largest companies have their headquarters in central London. Over 70 per cent of the FTSE 100 are within London's metropolitan area, and 75 per cent of Fortune 500 companies have offices in London. 
Media companies are concentrated in London, and the media distribution industry is London's second most competitive sector. The BBC is a significant employer, while other broadcasters also have headquarters around the City. Many national newspapers are edited in London. London is a major retail centre and in 2010 had the highest non-food retail sales of any city in the world, with a total spend of around £64.2 billion. The Port of London is the second-largest in the United Kingdom, handling 45 million tonnes of cargo each year. A growing number of technology companies are based in London, notably in East London Tech City, also known as Silicon Roundabout. In April 2014, the city was among the first to receive a geoTLD. In February 2014 London was ranked as the European City of the Future in the 2014/15 list by fDi Magazine. The gas and electricity distribution networks that manage and operate the towers, cables and pressure systems that deliver energy to consumers across the city are managed by National Grid plc, SGN and UK Power Networks. London is one of the leading tourist destinations in the world and in 2015 was ranked as the most visited city in the world with over 65 million visits. It is also the top city in the world by visitor cross-border spending, estimated at US$20.23 billion in 2015. Tourism is one of London's prime industries, employing the equivalent of 350,000 full-time workers in 2003, and the city accounts for 54% of all inbound visitor spending in the UK. London was ranked the world's top city destination by TripAdvisor users. In 2015 the ten most-visited attractions in the UK were all in London. The number of hotel rooms in London in 2015 stood at 138,769, and is expected to grow over the years. Transport is one of the four main areas of policy administered by the Mayor of London; however, the mayor's financial control does not extend to the longer distance rail network that enters London. In 2007 he assumed responsibility for some local lines, which now form the London Overground network, adding to the existing responsibility for the London Underground, trams and buses. The public transport network is administered by Transport for London (TfL). The lines that formed the London Underground, as well as trams and buses, became part of an integrated transport system in 1933 when the London Passenger Transport Board or "London Transport" was created. Transport for London is now the statutory corporation responsible for most aspects of the transport system in Greater London, and is run by a board and a commissioner appointed by the Mayor of London. London is a major international air transport hub with the busiest city airspace in the world. Eight airports use the word "London" in their name, but most traffic passes through six of these. Additionally, various other airports also serve London, catering primarily to general aviation flights. The London Underground, commonly referred to as the Tube, dates from 1863 and is the oldest and third-longest metro system in the world. The system serves 270 stations and was formed from several private companies, including the world's first underground electric line, the City and South London Railway. Over four million journeys are made every day on the Underground network, over 1 billion each year. An investment programme is attempting to reduce congestion and improve reliability, including £6.5 billion (€7.7 billion) spent before the 2012 Summer Olympics. 
The Docklands Light Railway (DLR), which opened in 1987, is a second, more local metro system using smaller and lighter tram-type vehicles that serve the Docklands, Greenwich and Lewisham. There are more than 360 railway stations in the London Travelcard Zones on an extensive above-ground suburban railway network. South London, particularly, has a high concentration of railways as it has fewer Underground lines. Most rail lines terminate around the centre of London, running into eighteen terminal stations, with the exception of the Thameslink trains connecting Bedford in the north and Brighton in the south via Luton and Gatwick airports. London has Britain's busiest station by number of passengers – Waterloo, with over 184 million people using the interchange station complex (which includes Waterloo East station) each year. Clapham Junction is the busiest station in Europe by the number of trains passing through it. With the need for more rail capacity in London, Crossrail is expected to open in 2021. It will be a new railway line running east to west through London and into the Home Counties with a branch to Heathrow Airport. It is Europe's biggest construction project, with a £15 billion projected cost. London is the centre of the National Rail network, with 70 per cent of rail journeys starting or ending in London. Like suburban rail services, regional and inter-city trains depart from several termini around the city centre, linking London with the rest of Britain including Birmingham, Brighton, Bristol, Cambridge, Cardiff, Chester, Derby, Holyhead (for Dublin), Edinburgh, Exeter, Glasgow, Leeds, Liverpool, Nottingham, Manchester, Newcastle upon Tyne, Norwich, Reading, Sheffield and York. Some international railway services to Continental Europe were operated during the 20th century as boat trains, such as the "Admiraal de Ruijter" to Amsterdam and the "Night Ferry" to Paris and Brussels. The opening of the Channel Tunnel in 1994 connected London directly to the continental rail network, allowing Eurostar services to begin. Since 2007, high-speed trains link St. Pancras International with Lille, Calais, Paris, Disneyland Paris, Brussels, Amsterdam and other European tourist destinations via the High Speed 1 rail link and the Channel Tunnel. The first high-speed domestic trains started in June 2009, linking Kent to London. There are plans for a second high speed line linking London to the Midlands, North West England, and Yorkshire. Although rail freight levels are far down compared to their peak, significant quantities of cargo are also carried into and out of London by rail, chiefly building materials and landfill waste. As a major hub of the British railway network, London's tracks also carry large amounts of freight for the other regions, such as container freight from the Channel Tunnel and English Channel ports, and nuclear waste for reprocessing at Sellafield. London's bus network runs 24 hours a day, with about 8,500 buses, more than 700 bus routes and around 19,500 bus stops. In 2013, the network had more than 2 billion commuter trips per year, more than the Underground. Around £850 million is taken in revenue each year. London has the largest wheelchair-accessible bus network in the world and, from the third quarter of 2007, became more accessible to hearing and visually impaired passengers as audio-visual announcements were introduced. London has a modern tram network, known as Tramlink, centred on Croydon in South London. The network has 39 stops and four routes, and carried 28 million people in 2013. 
Since June 2008, Transport for London has wholly owned Tramlink. London's first and only cable car is the Emirates Air Line, which opened in June 2012. The cable car crosses the River Thames, and links Greenwich Peninsula and the Royal Docks in the east of the city. It is integrated with London's Oyster Card ticketing system, although special fares are charged. It cost £60 million to build and carries more than 3,500 passengers every day. Similar to the Santander Cycles bike hire scheme, the cable car is sponsored in a 10-year deal by the airline Emirates. In the Greater London Area, around 650,000 people use a bicycle every day; out of a total population of around 8.8 million, this means that only around 7% of Greater London's population cycle on an average day. This relatively low share may be due to the low level of investment in cycling in London, about £110 million per year (equating to around £12 per person), compared with £22 per person in the Netherlands. Cycling has become an increasingly popular way to get around London. The launch of a cycle hire scheme in July 2010 was successful and generally well received. The Port of London, once the largest in the world, is now only the second-largest in the United Kingdom, handling 45 million tonnes of cargo each year as of 2009. Most of this cargo passes through the Port of Tilbury, outside the boundary of Greater London. London has river boat services on the Thames known as Thames Clippers, which offer both commuter and tourist boat services. These run every 20 minutes between Embankment Pier and North Greenwich Pier. The Woolwich Ferry, with 2.5 million passengers every year, is a frequent service linking the North and South Circular Roads. Although the majority of journeys in central London are made by public transport, car travel is common in the suburbs. The inner ring road (around the city centre), the North and South Circular roads (in the suburbs), and the outer orbital motorway (the M25, outside the built-up area) encircle the city and are intersected by a number of busy radial routes—but very few motorways penetrate into inner London. A plan for a comprehensive network of motorways throughout the city (the Ringways Plan) was prepared in the 1960s but was mostly cancelled in the early 1970s. The M25 is the second-longest ring-road motorway in Europe, at 188 km (117 mi) long. The A1 and M1 connect London to Leeds, Newcastle and Edinburgh. London is notorious for its traffic congestion. In 2003, a congestion charge was introduced to reduce traffic volumes in the city centre. With a few exceptions, motorists are required to pay to drive within a defined zone encompassing much of central London. Motorists who are residents of the defined zone can buy a greatly reduced season pass. The London government initially expected the Congestion Charge Zone to increase daily peak period Underground and bus users, reduce road traffic, increase traffic speeds, and reduce queues; however, the increase in private hire vehicles has affected these expectations. Over the course of several years, the average number of cars entering the centre of London on a weekday was reduced from 195,000 to 125,000 cars – a 35-per-cent reduction of vehicles driven per day. London is a major global centre of higher education teaching and research and has the largest concentration of higher education institutes in Europe. 
According to the QS World University Rankings 2015/16, London has the greatest concentration of top class universities in the world and its international student population of around 110,000 is larger than any other city in the world. A 2014 PricewaterhouseCoopers report termed London the global capital of higher education. A number of world-leading education institutions are based in London. In the 2014/15 "QS World University Rankings", Imperial College London is ranked joint-second in the world, University College London (UCL) is ranked fifth, and King's College London (KCL) is ranked 16th. The London School of Economics has been described as the world's leading social science institution for both teaching and research. The London Business School is considered one of the world's leading business schools and in 2015 its MBA programme was ranked second-best in the world by the "Financial Times". The city is also home to three of the world’s top ten performing arts schools (as ranked by the 2020 QS World University Rankings): the Royal College of Music (ranking 2nd in the world), the Royal Academy of Music (ranking 4th) and the Guildhall School of Music and Drama (ranking 6th). With 120,000 students in London, the federal University of London is the largest contact teaching university in the UK. It includes five multi-faculty universities – City, King's College London, Queen Mary, Royal Holloway and UCL – and a number of smaller and more specialised institutions including Birkbeck, the Courtauld Institute of Art, Goldsmiths, the London Business School, the London School of Economics, the London School of Hygiene & Tropical Medicine, the Royal Academy of Music, the Central School of Speech and Drama, the Royal Veterinary College and the School of Oriental and African Studies. Members of the University of London have their own admissions procedures, and some award their own degrees. A number of universities in London are outside the University of London system, including Brunel University, Imperial College London, Kingston University, London Metropolitan University, University of East London, University of West London, University of Westminster, London South Bank University, Middlesex University, and University of the Arts London (the largest university of art, design, fashion, communication and the performing arts in Europe). In addition there are three international universities in London – Regent's University London, Richmond, The American International University in London and Schiller International University. London is home to five major medical schools – Barts and The London School of Medicine and Dentistry (part of Queen Mary), King's College London School of Medicine (the largest medical school in Europe), Imperial College School of Medicine, UCL Medical School and St George's, University of London – and has many affiliated teaching hospitals. It is also a major centre for biomedical research, and three of the UK's eight academic health science centres are based in the city – Imperial College Healthcare, King's Health Partners and UCL Partners (the largest such centre in Europe). Additionally, many biomedical and biotechnology spin out companies from these research institutions are based around the city, most prominently in White City. 
There are a number of business schools in London, including the London School of Business and Finance, Cass Business School (part of City University London), Hult International Business School, ESCP Europe, European Business School London, Imperial College Business School, the London Business School and the UCL School of Management. London is also home to many specialist arts education institutions, including the Academy of Live and Recorded Arts, Central School of Ballet, LAMDA, London College of Contemporary Arts (LCCA), London Contemporary Dance School, National Centre for Circus Arts, RADA, Rambert School of Ballet and Contemporary Dance, the Royal College of Art and Trinity Laban. The majority of primary and secondary schools and further-education colleges in London are controlled by the London boroughs or otherwise state-funded; leading examples include Ashbourne College, Bethnal Green Academy, Brampton Manor Academy, City and Islington College, City of Westminster College, David Game College, Ealing, Hammersmith and West London College, Leyton Sixth Form College, London Academy of Excellence, Tower Hamlets College, and Newham Collegiate Sixth Form Centre. There are also a number of private schools and colleges in London, some old and famous, such as City of London School, Harrow, St Paul's School, Haberdashers' Aske's Boys' School, University College School, The John Lyon School, Highgate School and Westminster School. Leisure is a major part of the London economy, with a 2003 report attributing a quarter of the entire UK leisure economy to London at 25.6 events per 1000 people. Globally the city is amongst the big four fashion capitals of the world, and according to official statistics, London is the world's third-busiest film production centre, presents more live comedy than any other city, and has the biggest theatre audience of any city in the world. Within the City of Westminster in London, the entertainment district of the West End has its focus around Leicester Square, where London and world film premieres are held, and Piccadilly Circus, with its giant electronic advertisements. London's theatre district is here, as are many cinemas, bars, clubs, and restaurants, including the city's Chinatown district (in Soho), and just to the east is Covent Garden, an area housing speciality shops. The city is the home of Andrew Lloyd Webber, whose musicals have dominated West End theatre since the late 20th century. The United Kingdom's Royal Ballet, English National Ballet, Royal Opera, and English National Opera are based in London and perform at the Royal Opera House, the London Coliseum, Sadler's Wells Theatre, and the Royal Albert Hall, as well as touring the country. Islington's Upper Street, extending northwards from Angel, has more bars and restaurants than any other street in the United Kingdom. Europe's busiest shopping area is Oxford Street, which is also the longest shopping street in the UK. Oxford Street is home to vast numbers of retailers and department stores, including the world-famous Selfridges flagship store. Knightsbridge, home to the equally renowned Harrods department store, lies to the south-west. London is home to designers Vivienne Westwood, Galliano, Stella McCartney, Manolo Blahnik, and Jimmy Choo, among others; its renowned art and fashion schools make it an international centre of fashion alongside Paris, Milan, and New York City. London offers a great variety of cuisine as a result of its ethnically diverse population. 
Gastronomic centres include the Bangladeshi restaurants of Brick Lane and the Chinese restaurants of Chinatown. There is a variety of annual events, beginning with the relatively new New Year's Day Parade, a fireworks display at the London Eye; the world's second largest street party, the Notting Hill Carnival, is held on the late August Bank Holiday each year. Traditional parades include November's Lord Mayor's Show, a centuries-old event celebrating the annual appointment of a new Lord Mayor of the City of London with a procession along the streets of the City, and June's Trooping the Colour, a formal military pageant performed by regiments of the Commonwealth and British armies to celebrate the Queen's Official Birthday. The Boishakhi Mela is a Bengali New Year festival celebrated by the British Bangladeshi community. It is the largest open-air Asian festival in Europe. After the Notting Hill Carnival, it is the second-largest street festival in the United Kingdom, attracting over 80,000 visitors from across the country. London has been the setting for many works of literature. The pilgrims in Geoffrey Chaucer's late 14th-century "Canterbury Tales" set out for Canterbury from London – specifically, from the Tabard Inn, Southwark. William Shakespeare spent a large part of his life living and working in London; his contemporary Ben Jonson was also based there, and some of his work, most notably his play "The Alchemist", was set in the city. "A Journal of the Plague Year" (1722) by Daniel Defoe is a fictionalisation of the events of the 1665 Great Plague. The literary centres of London have traditionally been hilly Hampstead and (since the early 20th century) Bloomsbury. Writers closely associated with the city are the diarist Samuel Pepys, noted for his eyewitness account of the Great Fire; Charles Dickens, whose representation of a foggy, snowy, grimy London of street sweepers and pickpockets has been a major influence on people's vision of early Victorian London; and Virginia Woolf, regarded as one of the foremost modernist literary figures of the 20th century. Later important depictions of London from the 19th and early 20th centuries are Dickens' novels and Arthur Conan Doyle's Sherlock Holmes stories. Also of significance is Letitia Elizabeth Landon's "Calendar of the London Seasons" (1834). Modern writers pervasively influenced by the city include Peter Ackroyd, author of a "biography" of London, and Iain Sinclair, who writes in the genre of psychogeography. London has played a significant role in the film industry. Major studios within or bordering London include Twickenham, Ealing, Shepperton, Pinewood, Elstree and Borehamwood, and a special effects and post-production community centred in Soho. Working Title Films has its headquarters in London. London has been the setting for films including "Oliver Twist" (1948), "Scrooge" (1951), "Peter Pan" (1953), "The 101 Dalmatians" (1961), "My Fair Lady" (1964), "Mary Poppins" (1964), "Blowup" (1966), "The Long Good Friday" (1980), "The Great Mouse Detective" (1986), "Notting Hill" (1999), "Love Actually" (2003), "V For Vendetta" (2005) and "The King's Speech" (2010). Notable actors and filmmakers from London include Charlie Chaplin, Alfred Hitchcock, Michael Caine, Helen Mirren, Gary Oldman, Christopher Nolan, Jude Law, Benedict Cumberbatch, Tom Hardy, Keira Knightley and Daniel Day-Lewis. The British Academy Film Awards have taken place at the Royal Opera House. 
London is a major centre for television production, with studios including BBC Television Centre, The Fountain Studios and The London Studios. Many television programmes have been set in London, including the popular television soap opera "EastEnders", broadcast by the BBC since 1985. London is home to many museums, galleries, and other institutions, many of which are free of admission charges and are major tourist attractions as well as playing a research role. The first of these to be established was the British Museum in Bloomsbury, in 1753. Originally containing antiquities, natural history specimens, and the national library, the museum now has 7 million artefacts from around the globe. In 1824, the National Gallery was founded to house the British national collection of Western paintings; this now occupies a prominent position in Trafalgar Square. The British Library is one of the largest libraries in the world, and the national library of the United Kingdom. There are many other research libraries, including the Wellcome Library and Dana Centre, as well as university libraries, including the British Library of Political and Economic Science at LSE, the Central Library at Imperial, the Maughan Library at King's, and the Senate House Libraries at the University of London. In the latter half of the 19th century, the locale of South Kensington was developed as "Albertopolis", a cultural and scientific quarter. Three major national museums are there: the Victoria and Albert Museum (for the applied arts), the Natural History Museum, and the Science Museum. The National Portrait Gallery was founded in 1856 to house depictions of figures from British history; its holdings now comprise the world's most extensive collection of portraits. The national gallery of British art is at Tate Britain, originally established as an annexe of the National Gallery in 1897. The Tate Gallery, as it was formerly known, also became a major centre for modern art; in 2000, this collection moved to Tate Modern, a new gallery housed in the former Bankside Power Station. London is one of the major classical and popular music capitals of the world and hosts major music corporations, such as Universal Music Group International and Warner Music Group, as well as countless bands, musicians and industry professionals. The city is also home to many orchestras and concert halls, such as the Barbican Arts Centre (principal base of the London Symphony Orchestra and the London Symphony Chorus), Cadogan Hall (Royal Philharmonic Orchestra) and the Royal Albert Hall (The Proms). London's two main opera houses are the Royal Opera House and the London Coliseum. The UK's largest pipe organ is at the Royal Albert Hall. Other significant instruments are at the cathedrals and major churches. Several conservatoires are within the city: Royal Academy of Music, Royal College of Music, Guildhall School of Music and Drama and Trinity Laban. London has numerous venues for rock and pop concerts, including the world's busiest indoor venue, The O2 Arena, and Wembley Arena, as well as many mid-sized venues, such as Brixton Academy, the Hammersmith Apollo and the Shepherd's Bush Empire. Several music festivals, including the Wireless Festival, South West Four, Lovebox and Hyde Park's British Summer Time, are held in London. The city is home to the original Hard Rock Cafe and the Abbey Road Studios, where The Beatles recorded many of their hits. 
In the 1960s, 1970s and 1980s, musicians and groups like Elton John, Pink Floyd, Cliff Richard, David Bowie, Queen, The Kinks, The Rolling Stones, The Who, Eric Clapton, Led Zeppelin, The Small Faces, Iron Maiden, Fleetwood Mac, Elvis Costello, Cat Stevens, The Police, The Cure, Madness, The Jam, Ultravox, Spandau Ballet, Culture Club, Dusty Springfield, Phil Collins, Rod Stewart, Adam Ant, Status Quo and Sade derived their sound from the streets and rhythms of London. London was instrumental in the development of punk music, with figures such as the Sex Pistols, The Clash, and Vivienne Westwood all based in the city. More recent artists to emerge from the London music scene include George Michael's Wham!, Kate Bush, Seal, the Pet Shop Boys, Bananarama, Siouxsie and the Banshees, Bush, the Spice Girls, Jamiroquai, Blur, McFly, The Prodigy, Gorillaz, Bloc Party, Mumford & Sons, Coldplay, Amy Winehouse, Adele, Sam Smith, Ed Sheeran, Paloma Faith, Ellie Goulding, One Direction and Florence and the Machine. London is also a centre for urban music. In particular, the genres UK garage, drum and bass, dubstep and grime evolved in the city from the foreign genres of hip hop and reggae. Music station BBC Radio 1Xtra was set up to support the rise of local urban contemporary music both in London and in the rest of the United Kingdom. A 2013 report by the City of London Corporation said that London is the "greenest city" in Europe with 35,000 acres of public parks, woodlands and gardens. The largest parks in the central area of London are three of the eight Royal Parks, namely Hyde Park and its neighbour Kensington Gardens in the west, and Regent's Park to the north. Hyde Park in particular is popular for sports and sometimes hosts open-air concerts. Regent's Park contains London Zoo, the world's oldest scientific zoo, and is near Madame Tussauds Wax Museum. Primrose Hill, immediately to the north of Regent's Park, is a popular spot from which to view the city skyline. Close to Hyde Park are smaller Royal Parks, Green Park and St. James's Park. A number of large parks lie outside the city centre, including Hampstead Heath and the remaining Royal Parks of Greenwich Park to the southeast and Bushy Park and Richmond Park (the largest) to the southwest. Hampton Court Park is also a royal park but, because it contains a palace, it is administered by Historic Royal Palaces, unlike the eight Royal Parks. Close to Richmond Park is Kew Gardens, which has the world's largest collection of living plants. In 2003, the gardens were put on the UNESCO list of World Heritage Sites. There are also parks administered by London's borough councils, including Victoria Park in the East End and Battersea Park in the centre. Some more informal, semi-natural open spaces also exist, including Hampstead Heath in North London and Epping Forest, which covers 2,476 hectares (6,118 acres) in the east. Both are controlled by the City of London Corporation. Hampstead Heath incorporates Kenwood House, a former stately home and a popular location in the summer months when classical music concerts are held by the lake, attracting thousands of people every weekend to enjoy the music, scenery and fireworks. Epping Forest is a popular venue for various outdoor activities, including mountain biking, walking, horse riding, golf, angling, and orienteering. Walking is a popular recreational activity in London. 
Areas that provide for walks include Wimbledon Common, Epping Forest, Hampton Court Park, Hampstead Heath, the eight Royal Parks, canals and disused railway tracks. Access to canals and rivers has improved recently, including the creation of the Thames Path, some of which is within Greater London, and The Wandle Trail; this runs through South London along the River Wandle, a tributary of the River Thames. Other long distance paths, linking green spaces, have also been created, including the Capital Ring, the Green Chain Walk, London Outer Orbital Path ("Loop"), Jubilee Walkway, Lea Valley Walk, and the Diana, Princess of Wales Memorial Walk. London has hosted the Summer Olympics three times: in 1908, 1948, and 2012, making it the first city to host the modern Games three times. The city was also the host of the British Empire Games in 1934. In 2017, London hosted the World Championships in Athletics for the first time. London's most popular sport is football and it has five clubs in the English Premier League as of the 2019–20 season: Arsenal, Chelsea, Crystal Palace, Tottenham Hotspur, and West Ham United. Other professional teams in London are Fulham, Queens Park Rangers, Brentford, Millwall, Charlton Athletic, AFC Wimbledon, Leyton Orient, Barnet, Sutton United, Bromley and Dagenham & Redbridge. From 1924, the original Wembley Stadium was the home of the England national football team. It hosted the 1966 FIFA World Cup Final, with England defeating West Germany, and served as the venue for the FA Cup Final as well as rugby league's Challenge Cup final. The new Wembley Stadium serves exactly the same purposes and has a capacity of 90,000. Two Aviva Premiership rugby union teams are based in London, Saracens and Harlequins. London Scottish, London Welsh and London Irish play in the RFU Championship, and other rugby union clubs in the city include Richmond F.C., Rosslyn Park F.C., Westcombe Park R.F.C. and Blackheath F.C. Twickenham Stadium in south-west London hosts home matches for the England national rugby union team and has a capacity of 82,000 now that the new south stand has been completed. While rugby league is more popular in the north of England, there are two professional rugby league clubs in London – the London Broncos in the second-tier RFL Championship, who play at the Trailfinders Sports Ground in West Ealing, and the third-tier League 1 team, the London Skolars from Wood Green, Haringey. One of London's best-known annual sports competitions is the Wimbledon Tennis Championships, held at the All England Club in the south-western suburb of Wimbledon. Played in late June to early July, it is the oldest tennis tournament in the world, and widely considered the most prestigious. London has two Test cricket grounds, Lord's (home of Middlesex C.C.C.) in St John's Wood and the Oval (home of Surrey C.C.C.) in Kennington. Lord's has hosted four finals of the Cricket World Cup, and is known as the "Home of Cricket". Other key events are the annual mass-participation London Marathon, in which some 35,000 runners attempt a course around the city, and the University Boat Race on the River Thames from Putney to Mortlake.
https://en.wikipedia.org/wiki?curid=17867
Lagâri Hasan Çelebi Lagâri Hasan Çelebi was an Ottoman aviator who, according to the account written by traveller Evliya Çelebi, made a successful manned rocket flight. Evliya Çelebi reported that in 1633, Lagâri Hasan Çelebi launched in a 7-winged rocket using 50 okka (140 lbs) of gunpowder from Sarayburnu, the point below Topkapı Palace in Istanbul. The flight was said to be undertaken at the time of the birth of Sultan Murad IV's daughter. As Evliya Çelebi wrote, Lagâri proclaimed before launch "O my sultan! Be blessed, I am going to talk to Jesus!"; after ascending in the rocket, he landed in the sea, swimming ashore and joking "O my sultan! Jesus sends his regards to you!"; he was rewarded by the Sultan with silver and the rank of sipahi in the Ottoman army. Evliya Çelebi also wrote of Lagâri's brother, Hezârfen Ahmed Çelebi, making a flight by glider a year earlier. "Istanbul Beneath My Wings" is a 1996 film about the lives of Lagâri Hasan Çelebi, his brother and fellow aviator Hezârfen Ahmed Çelebi, and Ottoman society in the early 17th century as witnessed and narrated by Evliya Çelebi. The legend was addressed in an experiment by the television show "MythBusters", on November 11, 2009, in the episode "Crash and Burn"; however, the team noted that Evliya Çelebi had not sufficiently specified the alleged design used by Lagâri Hasan and said that it would have been "extremely difficult" for a 17th-century figure, unequipped with modern steel alloys and welding, to land safely or even achieve thrust at all. Although the re-imagined rocket rose, it exploded midflight.
https://en.wikipedia.org/wiki?curid=17869
Letterboxing (filming) Letterboxing is the practice of transferring film shot in a widescreen aspect ratio to standard-width video formats while preserving the film's original aspect ratio. The resulting videographic image has mattes (black bars) above and below it; these mattes are part of the image (i.e., of each frame of the video signal). "LBX" or "LTBX" are the identifying abbreviations for films and images so formatted. The first use of letterboxing in consumer video appeared with the RCA Capacitance Electronic Disc (CED) videodisc format. Initially, letterboxing was limited to several key sequences of a film, such as opening and closing credits, but was later used for entire films. The first fully letterboxed CED release was "Amarcord" in 1984, and several others followed including "The Long Goodbye", "Monty Python and the Holy Grail" and "The King of Hearts". Each disc contains a label noting the use of "RCA's innovative wide-screen mastering technique." The term "SmileBox" is a registered trademark used to describe a type of letterboxing for Cinerama films, such as on the Blu-ray release of "How the West Was Won". The image is produced with 3D mapping technology to approximate a curved screen. Digital broadcasting allows 1.78:1 (16:9) widescreen format transmissions without losing resolution, and thus widescreen is becoming the television norm. Most television channels in Europe broadcast standard-definition programming in 16:9, while in the United States such programming is downscaled to letterbox. When using a 4:3 television, it is possible to display such programming in either a letterbox format or in a 4:3 centre-cut format (where the edges of the picture are lost). A letterboxed 14:9 compromise ratio was often broadcast in analogue transmissions in European countries making the transition from 4:3 to 16:9. In addition, recent years have seen an increase of "fake" letterbox mattes on television to give the impression of a cinema film, often seen in adverts, trailers or television programmes such as "Top Gear". Current high-definition television (HDTV) systems use video displays with a wider aspect ratio than older television sets, making it easier to accurately display widescreen films. In addition to films produced for the cinema, some television programming is produced in high definition and therefore widescreen. On a widescreen television set, a 1.78:1 image fills the screen; however, 2.39:1 aspect ratio films are letterboxed with narrow mattes. Because the 1.85:1 aspect ratio does not match the 1.78:1 (16:9) aspect ratio of widescreen DVDs and high-definition video, slight letterboxing occurs. Usually, such matting of 1.85:1 film is eliminated to match the 1.78:1 aspect ratio when the image is transferred to DVD and HD. Letterbox mattes are not necessarily black. IBM has used blue mattes for many of their TV ads, yellow mattes in their "I am Superman" Lotus ads, and green mattes in ads about efficiency and environmental sustainability. Other uses of colored mattes appear in ads from Allstate, Aleve, and Kodak among others, and in music videos such as Zebrahead's "Playmate of the Year". In other instances mattes are animated, such as in the music video for "Never Gonna Stop (The Red Red Kroovy)", and even parodied, as in the final scene of the Crazy Frog "Axel F" music video, in which Crazy Frog peeks over the matte on the lower edge of the screen with part of his hands overlapping the matte. Similar to breaking the border of a comic's panel, it is a form of breaking the fourth wall. 
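The size of the mattes follows directly from the source and display aspect ratios described above. The short Python sketch below is my own illustration of that arithmetic, assuming a 1920×1080 (16:9) display frame rather than any particular broadcast standard:

```python
# Illustrative sketch of letterbox geometry; the 1920x1080 frame size is an assumption.
def letterbox_bars(src_ratio, frame_w=1920, frame_h=1080):
    """Return the height of the active picture and of the combined top+bottom mattes
    when a film of aspect ratio src_ratio is shown full-width in the frame."""
    picture_h = round(frame_w / src_ratio)
    return picture_h, frame_h - picture_h

for ratio in (2.39, 1.85, 16 / 9):
    picture, mattes = letterbox_bars(ratio)
    print(f"{ratio:.2f}:1 source -> picture {picture} lines, mattes {mattes} lines in total")
```

On this assumed 1080-line frame, a 2.39:1 film keeps mattes of roughly 138 lines each at top and bottom, while a 1.85:1 film loses only about 21 lines to each matte, which is why the 1.85:1 case above is described as only "slight letterboxing".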
Pillarboxing (reversed letterboxing) is the display of an image within a wider image frame by adding lateral mattes (vertical bars at the sides); for example, a 1.33:1 image has lateral mattes when displayed on a 16:9 aspect ratio television screen. An alternative to pillarboxing is "tilt-and-scan" (reversed pan and scan), horizontally matting the original 1.33:1 television images to the 1.78:1 aspect ratio, which at any given moment crops part of the top and/or bottom of the frame, hence the need for the "tilt" component. A tilt is a camera move in which the camera tilts up or down. Windowboxing occurs when an image appears centered in a television screen, with blank space on all four sides of the image, such as when a widescreen image that has been previously letterboxed to fit 1.33:1 is then pillarboxed to fit 16:9. It is also called "matchbox", "gutterbox", and "postage stamp" display. This occurs on the DVD editions of the "Star Trek" films on a 4:3 television when the included widescreen documentaries show footage from the original television series. It is also seen in one film release that displays widescreen pillarboxing, with 1.85:1 scenes in a 2.40:1 frame that is subsequently letterboxed. It is common to see windowboxed commercials on HD television networks, because many commercials are shot in 16:9 but distributed to networks in SD, letterboxed to fit 1.33:1. Many 1980s 8-bit home computers featured a gutterboxed display mode, because the TV screens normally used as monitors at that time tended to distort the image near the border of the screen to such an extent that text displayed in that area became illegible. Moreover, due to the overscanned nature of television video, the precise edges of the visible area of the screen varied from television set to television set, so characters near the expected border of the active screen area might be behind the bezel or off the edge of the screen. The Commodore 64, VIC-20, and Commodore 128 (in 40-column mode) featured coloured gutterboxing of the main text window, while the Atari 8-bit family featured a blue text window with a black border. The original IBM PC CGA display adapter was the same, and the monochrome MDA, the predecessor of the CGA, as well as the later EGA and VGA, also featured gutterboxing; this is also called "underscanned" video. The Fisher-Price PXL-2000 camcorder of the late 1980s recorded a windowboxed image to compensate partially for low resolution. Occasionally, an image is deliberately windowboxed for stylistic effect; for example, the documentary-style sequence of the film "Rent" suggests an older-format camera by using the 4:3 aspect ratio, and the opening sequence of the Oliver Stone film "JFK" features pillarboxing to represent 1960s-era 4:3 television footage. The film "Sneakers" uses windowboxing in one scene for dramatic effect.
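Pillarboxing and windowboxing follow from the same fit-inside rule as letterboxing. The sketch below is my own illustration of that rule; the specific resolutions (640×480 and 1920×1080) are chosen purely as examples and are not taken from the article:

```python
# Illustrative aspect-ratio fitting; all resolutions here are example values.
def fit(src_w, src_h, dst_w, dst_h):
    """Scale a src_w x src_h image to fit entirely inside dst_w x dst_h while
    preserving its aspect ratio. The unused part of dst becomes mattes."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# 4:3 material on a 16:9 screen -> pillarboxing (vertical mattes at the sides)
print(fit(640, 480, 1920, 1080))        # (1440, 1080): 240-pixel bars left and right

# 16:9 material first letterboxed into a 4:3 frame, then that whole frame
# pillarboxed onto a 16:9 screen -> windowboxing (mattes on all four sides)
inner_w, inner_h = fit(1920, 1080, 640, 480)   # picture inside the 4:3 frame: (640, 360)
outer_w, outer_h = fit(640, 480, 1920, 1080)   # the 4:3 frame on the 16:9 screen: (1440, 1080)
scale = outer_w / 640
print(round(inner_w * scale), round(inner_h * scale))   # active picture: 1440 x 810
```

In this example the active picture ends up at 1440×810 inside the 1920×1080 screen, leaving mattes on all four sides: the "postage stamp" effect described above.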
https://en.wikipedia.org/wiki?curid=17871
Ligament A ligament is the fibrous connective tissue that connects bones to other bones. It is also known as "articular ligament", "articular larua", "fibrous ligament", or "true ligament". Other structures in the body referred to as ligaments include the peritoneal and fetal remnant ligaments described below, as well as the periodontal ligament. Ligaments are similar to tendons and fasciae, as they are all made of connective tissue. The differences in them are in the connections that they make: ligaments connect one bone to another bone, tendons connect muscle to bone, and fasciae connect muscles to other muscles. These are all found in the skeletal system of the human body. Ligaments cannot usually be regenerated naturally; however, there are periodontal ligament stem cells located near the periodontal ligament which are involved in the adult regeneration of the periodontal ligament. The study of ligaments is known as desmology. "Ligament" most commonly refers to a band of dense regular connective tissue bundles made of collagenous fibers, with bundles protected by dense irregular connective tissue sheaths. Ligaments connect bones to other bones to form joints, while tendons connect bone to muscle. Some ligaments limit the mobility of articulations or prevent certain movements altogether. Capsular ligaments are part of the articular capsule that surrounds synovial joints. They act as mechanical reinforcements. Extra-capsular ligaments work together with the other ligaments to provide joint stability. Intra-capsular ligaments, which are much less common, also provide stability but permit a far larger range of motion. Cruciate ligaments are paired ligaments in the form of a cross. Ligaments are viscoelastic. They gradually strain when under tension and return to their original shape when the tension is removed. However, they cannot retain their original shape when extended past a certain point or for a prolonged period of time. This is one reason why dislocated joints must be set as quickly as possible: if the ligaments lengthen too much, then the joint will be weakened, becoming prone to future dislocations. Athletes, gymnasts, dancers, and martial artists perform stretching exercises to lengthen their ligaments, making their joints more supple. The term hypermobility refers to people with more-elastic ligaments, allowing their joints to stretch and contort further; this is sometimes still called double-jointedness. The consequence of a broken ligament can be instability of the joint. Not all broken ligaments need surgery, but, if surgery is needed to stabilise the joint, the broken ligament can be repaired. Scar tissue may prevent this. If it is not possible to fix the broken ligament, other procedures such as the Brunelli procedure can correct the instability. Instability of a joint can over time lead to wear of the cartilage and eventually to osteoarthritis. One of the most often torn ligaments in the body is the anterior cruciate ligament (ACL). The ACL is one of the ligaments crucial to knee stability and persons who tear their ACL often seek to undergo reconstructive surgery, which can be done through a variety of techniques and materials. One of these techniques is the replacement of the ligament with an artificial material. An artificial ligament is a reinforcing material that is used to replace a torn ligament, such as the ACL. Artificial ligaments are a synthetic material composed of a polymer, such as polyacrylonitrile fiber, polypropylene, PET (polyethylene terephthalate), or polyNaSS poly(sodium styrene sulfonate). Certain folds of peritoneum are referred to as "ligaments". 
Examples include the hepatoduodenal ligament and the broad ligament of the uterus. Certain tubular structures from the fetal period are also referred to as "ligaments" after they close up and turn into cord-like structures; examples include the ligamentum arteriosum, the remnant of the fetal ductus arteriosus, and the round ligament of the liver, the remnant of the umbilical vein.
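The viscoelastic behaviour described above, where a ligament gradually strains under sustained tension and then recovers once the load is removed, is often illustrated with simple spring-dashpot models. The sketch below uses a Kelvin-Voigt model with entirely made-up parameter values, purely to show the creep-and-recovery shape; it is a generic illustration, not a model taken from the article:

```python
# Illustrative Kelvin-Voigt creep and recovery; stiffness E, viscosity eta,
# load and time step are arbitrary example values, not ligament measurements.

def kelvin_voigt_strain(stress_history, E=1.0, eta=5.0, dt=0.01):
    """Integrate d(eps)/dt = (sigma(t) - E*eps) / eta with forward Euler."""
    eps, strains = 0.0, []
    for sigma in stress_history:
        eps += dt * (sigma - E * eps) / eta
        strains.append(eps)
    return strains

steps = 2000
# constant tension for the first half of the run, then the load is removed
load = [1.0 if i < steps // 2 else 0.0 for i in range(steps)]
strain = kelvin_voigt_strain(load)

print(f"strain under sustained tension: {strain[steps // 2 - 1]:.3f}")  # creeps toward sigma/E
print(f"strain after load is removed:   {strain[-1]:.3f}")              # relaxes back toward 0
```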
https://en.wikipedia.org/wiki?curid=17874
Loch Ness Monster The Loch Ness Monster, or Nessie, is a cryptid in cryptozoology and Scottish folklore that is said to inhabit Loch Ness in the Scottish Highlands. It is often described as large, long-necked, and with one or more humps protruding from the water. Popular interest and belief in the creature have varied since it was brought to worldwide attention in 1933. Evidence of its existence is anecdotal, with a number of disputed photographs and sonar readings. The scientific community regards the Loch Ness Monster as a phenomenon without biological basis, explaining sightings as hoaxes, wishful thinking, and the misidentification of mundane objects. The creature has been affectionately called Nessie since the 1940s. The first modern discussion of a sighting of a strange creature in the loch may have been in the 1870s, when D. Mackenzie claimed to have seen something "wriggling and churning up the water". This account was not published until 1934, however. Research indicates that several newspapers did publish items about a creature in the loch well before 1934. The best-known article that first attracted a great deal of attention about a creature was published on 2 May 1933 in the "Inverness Courier", about a large "beast" or "whale-like fish". The article by Alex Campbell, water bailiff for Loch Ness and a part-time journalist, discussed a sighting by Aldie Mackay of an enormous creature with the body of a whale rolling in the water in the loch while she and her husband John were driving on the A82 on 15 April 1933. The word "monster" was reportedly applied for the first time in Campbell's article, although some reports claim that it was coined by editor Evan Barron. "The Courier" in 2017 published excerpts from the Campbell article, which had been titled "Strange Spectacle in Loch Ness". "The creature disported itself, rolling and plunging for fully a minute, its body resembling that of a whale, and the water cascading and churning like a simmering cauldron. Soon, however, it disappeared in a boiling mass of foam. Both onlookers confessed that there was something uncanny about the whole thing, for they realised that here was no ordinary denizen of the depths, because, apart from its enormous size, the beast, in taking the final plunge, sent out waves that were big enough to have been caused by a passing steamer." According to a 2013 article, Mackay said that she had yelled, "Stop! The Beast!" when viewing the spectacle. In the late 1980s, a naturalist interviewed Aldie Mackay and she admitted to knowing that there had been an oral tradition of a "beast" in the loch well before her claimed sighting. Alex Campbell's 1933 article also stated that "Loch Ness has for generations been credited with being the home of a fearsome-looking monster". On 4 August 1933 the "Courier" published a report of another alleged sighting. This one was claimed by Londoner George Spicer, the head of a firm of tailors. Several weeks earlier, while they were driving around the loch, he and his wife saw "the nearest approach to a dragon or pre-historic animal that I have ever seen in my life" trundling across the road toward the loch with "an animal" in its mouth. He described it as having "a long neck, which moved up and down in the manner of a scenic railway". He said the body "was fairly big, with a high back", but "if there were any feet they must have been of the web kind, and as for a tail I cannot say, as it moved so rapidly, and when we got to the spot it had probably disappeared into the loch". 
Letters began appearing in the "Courier", often anonymously, claiming land or water sightings by the writer, their family or acquaintances or remembered stories. The accounts reached the media, which described a "monster fish", "sea serpent", or "dragon" and eventually settled on "Loch Ness monster". Over the years various hoaxes were also perpetrated, usually "proven" by photographs that were later debunked. The earliest report of a monster in the vicinity of Loch Ness appears in the "Life of St. Columba" by Adomnán, written in the sixth century AD. According to Adomnán, writing about a century after the events described, Irish monk Saint Columba was staying in the land of the Picts with his companions when he encountered local residents burying a man by the River Ness. They explained that the man was swimming in the river when he was attacked by a "water beast" that mauled him and dragged him underwater. They had tried to rescue him in a boat but he was killed. Columba sent a follower, Luigne moccu Min, to swim across the river. The beast approached him, but Columba made the sign of the cross and said: "Go no further. Do not touch the man. Go back at once." The creature stopped as if it had been "pulled back with ropes" and fled, and Columba's men and the Picts gave thanks for what they perceived as a miracle. Believers in the monster point to this story, set in the River Ness rather than the loch itself, as evidence for the creature's existence as early as the sixth century. Sceptics question the narrative's reliability, noting that water-beast stories were extremely common in medieval hagiographies and Adomnán's tale probably recycles a common motif attached to a local landmark. According to sceptics, Adomnán's story may be independent of the modern Loch Ness Monster legend and became attached to it by believers seeking to bolster their claims. Ronald Binns considers that this is the most serious of various alleged early sightings of the monster, but all other claimed sightings before 1933 are dubious and do not prove a monster tradition before that date. Christopher Cairney uses a specific historical and cultural analysis of Adomnán to separate Adomnán's story about St. Columba from the modern myth of the Loch Ness Monster, but finds an earlier and culturally significant use of Celtic "water beast" folklore along the way. In doing so he also discredits any strong connection between kelpies or water-horses and the modern "media-augmented" creation of the Loch Ness Monster. He also concludes that the story of Saint Columba may have been impacted by earlier Irish myths about the Caoránach and an Oilliphéist. In October 1871 (or 1872), D. Mackenzie of Balnain reportedly saw an object resembling a log or an upturned boat "wriggling and churning up the water". The object moved slowly at first, disappearing at a faster speed. Mackenzie sent his story in a letter to Rupert Gould in 1934, shortly after popular interest in the monster increased. In 1888, mason Alexander Macdonald of Abriachan sighted "a large stubby-legged animal" surfacing from the loch and propelling itself within fifty yards of the shore where Macdonald stood. Macdonald reported his sighting to Loch Ness water bailiff Alex Campbell, and described the creature as looking like a salamander. Modern interest in the monster was sparked by a sighting on 22 July 1933, when George Spicer and his wife saw "a most extraordinary form of animal" cross the road in front of their car. 
They described the creature as having a large body (about high and long) and a long, wavy, narrow neck, slightly thicker than an elephant's trunk and as long as the width of the road. They saw no limbs. It lurched across the road toward the loch, leaving a trail of broken undergrowth in its wake. It has been claimed that sightings of the monster increased after a road was built along the loch in early 1933, bringing workers and tourists to the formerly isolated area. However, Binns has described this as "the myth of the lonely loch", as it was far from isolated before then, due to the construction of the Caledonian Canal. In the 1930s, the existing road by the side of the loch was given a serious upgrade. (This work could conceivably have contributed to the legend, since tar barrels may have been left floating in the loch.) Hugh Gray's photograph, taken near Foyers on 12 November 1933, was the first photograph alleged to depict the monster. It was slightly blurred, and it has been noted that if one looks closely the head of a dog can be seen. Gray had taken his Labrador for a walk that day and it is suspected that the photograph depicts his dog fetching a stick from the loch. Others have suggested that the photograph depicts an otter or a swan. The original negative was lost. However, in 1963, Maurice Burton came into "possession of two lantern slides, contact positives from th[e] original negative" and when projected onto a screen they revealed an "otter rolling at the surface in characteristic fashion." On 5 January 1934 a motorcyclist, Arthur Grant, claimed to have nearly hit the creature while approaching Abriachan (near the north-eastern end of the loch) at about 1 a.m. on a moonlit night. According to Grant, it had a small head attached to a long neck; the creature saw him, and crossed the road back to the loch. Grant, a veterinary student, described it as a cross between a seal and a plesiosaur. He said he dismounted and followed it to the loch, but saw only ripples. Grant produced a sketch of the creature that was examined by zoologist Maurice Burton, who stated it was consistent with the appearance and behaviour of an otter. Regarding the large size of the creature reported by Grant, it has been suggested that this was a faulty observation due to the poor light conditions. Palaeontologist Darren Naish has suggested that Grant may have seen either an otter or a seal and exaggerated his sighting over time. The "surgeon's photograph" is reportedly the first photo of the creature's head and neck. Supposedly taken by Robert Kenneth Wilson, a London gynaecologist, it was published in the "Daily Mail" on 21 April 1934. Wilson's refusal to have his name associated with it led to it being known as the "surgeon's photograph". According to Wilson, he was looking at the loch when he saw the monster, grabbed his camera and snapped four photos. Only two exposures came out clearly; the first reportedly shows a small head and back, and the second shows a similar head in a diving position. The first photo became well known, and the second attracted little publicity because of its blurriness. For 60 years the photo was considered evidence of the monster's existence, although sceptics dismissed it as driftwood, an elephant, an otter or a bird. The photo's scale was controversial; it is often shown cropped (making the creature seem large and the ripples like waves), while the uncropped shot shows the other end of the loch and the monster in the centre. 
The ripples in the photo were found to fit the size and pattern of small ripples, rather than large waves photographed up close. Analysis of the original image fostered further doubt. In 1993, the makers of the Discovery Communications documentary "Loch Ness Discovered" analysed the uncropped image and found a white object visible in every version of the photo (implying that it was on the negative). It was believed to be the cause of the ripples, as if the object was being towed, although the possibility of a blemish on the negative could not be ruled out. An analysis of the full photograph indicated that the object was small, about long. Since 1994, most agree that the photo was an elaborate hoax. It had been described as fake in a 7 December 1975 "Sunday Telegraph" article that fell into obscurity. Details of how the photo was taken were published in the 1999 book, "Nessie – the Surgeon's Photograph Exposed", which contains a facsimile of the 1975 "Sunday Telegraph" article. The creature was reportedly a toy submarine built by Christian Spurling, the son-in-law of Marmaduke Wetherell. Wetherell had been publicly ridiculed by his employer, the "Daily Mail", after he found "Nessie footprints" that turned out to be a hoax. To get revenge on the "Mail", Wetherell perpetrated his hoax with co-conspirators Spurling (sculpture specialist), Ian Wetherell (his son, who bought the material for the fake), and Maurice Chambers (an insurance agent). The toy submarine was bought from F. W. Woolworths, and its head and neck were made from wood putty. After testing it in a local pond the group went to Loch Ness, where Ian Wetherell took the photos near the Altsaigh Tea House. When they heard a water bailiff approaching, Duke Wetherell sank the model with his foot and it is "presumably still somewhere in Loch Ness". Chambers gave the photographic plates to Wilson, a friend of his who enjoyed "a good practical joke". Wilson brought the plates to Ogston's, an Inverness chemist, and gave them to George Morrison for development. He sold the first photo to the "Daily Mail", who then announced that the monster had been photographed. Little is known of the second photo; it is often ignored by researchers, who believe its quality too poor and its differences from the first photo too great to warrant analysis. It shows a head similar to the first photo, with a more turbulent wave pattern and possibly taken at a different time and location in the loch. Some believe it to be an earlier, cruder attempt at a hoax, and others (including Roy Mackal and Maurice Burton) consider it a picture of a diving bird or otter that Wilson mistook for the monster. According to Morrison, when the plates were developed Wilson was uninterested in the second photo; he allowed Morrison to keep the negative, and the photo was rediscovered years later. When asked about the second photo by the "Ness Information Service Newsletter", Spurling " ... was vague, thought it might have been a piece of wood they were trying out as a monster, but [was] not sure." On 29 May 1938, South African tourist G. E. Taylor filmed something in the loch for three minutes on 16 mm colour film. The film was obtained by popular science writer Maurice Burton, who did not show it to other researchers. A single frame was published in his 1961 book, "The Elusive Monster". His analysis concluded it was a floating object, not an animal. 
On 15 August 1938, William Fraser, chief constable of Inverness-shire, wrote a letter stating that the monster existed beyond doubt and expressing concern about a hunting party that had arrived (with a custom-made harpoon gun) determined to catch the monster "dead or alive". He believed his power to protect the monster from the hunters was "very doubtful". The letter was released by the National Archives of Scotland on 27 April 2010. In December 1954, sonar readings were taken by the fishing boat "Rival III". Its crew noted a large object keeping pace with the vessel at a depth of . It was detected for before contact was lost and regained. Previous sonar attempts were inconclusive or negative. On 29 July 1955, Peter MacNab took a photograph at Urquhart Castle that depicted two long black humps in the water. The photograph was not made public until it appeared in Constance Whyte's 1957 book on the subject. On 23 October 1958 it was published by the "Weekly Scotsman". Author Ronald Binns wrote that the "phenomenon which MacNab photographed could easily be a wave effect resulting from three trawlers travelling closely together up the loch." Other researchers consider the photograph a hoax. Roy Mackal requested permission to use the photograph in his 1976 book. He received the original negative from MacNab, but discovered that it differed from the photograph that appeared in Whyte's book: the tree at the bottom left in Whyte's version was missing from the negative. It is suspected that the photograph was doctored by re-photographing a print. In 1960, aeronautical engineer Tim Dinsdale filmed a hump that left a wake as it crossed Loch Ness. Dinsdale, who reportedly had the sighting on the final day of his search, described it as reddish with a blotch on its side. He said that when he mounted his camera the object began to move, and he shot 40 feet of film. According to JARIC, the object was "probably animate". Others were sceptical, saying that the "hump" could not be ruled out as being a boat, and that when the contrast is increased a man in a boat can be seen. In 1993 Discovery Communications produced a documentary, "Loch Ness Discovered", with a digital enhancement of the Dinsdale film. A person who enhanced the film noticed a shadow in the negative that was not obvious in the developed film. By enhancing and overlaying frames, he found what appeared to be the rear body of a creature underwater: "Before I saw the film, I thought the Loch Ness Monster was a load of rubbish. Having done the enhancement, I'm not so sure". On 21 May 1977 Anthony "Doc" Shiels, camping next to Urquhart Castle, took "some of the clearest pictures of the monster until this day". Shiels, a magician and psychic, claimed to have summoned the animal out of the water. He later described it as an "elephant squid", claiming the long neck shown in the photograph is actually the squid's "trunk" and that a white spot at the base of the neck is its eye. Due to the lack of ripples and its staged look, it has been declared a hoax by a number of people. On 26 May 2007, 55-year-old laboratory technician Gordon Holmes videotaped what he said was "this jet black thing, about long, moving fairly fast in the water." Adrian Shine, a marine biologist at the Loch Ness 2000 Centre in Drumnadrochit, described the footage as among "the best footage [he had] ever seen." BBC Scotland broadcast the video on 29 May 2007. "STV News North Tonight" aired the footage on 28 May 2007 and interviewed Holmes. 
Shine was also interviewed, and suggested that the footage was an otter, seal or water bird. On 24 August 2011 Loch Ness boat captain Marcus Atkinson photographed a sonar image of an unidentified object that seemed to follow his boat for two minutes at a depth of , and ruled out the possibility of a small fish or seal. In April 2012, a scientist from the National Oceanography Centre said that the image is a bloom of algae and zooplankton. On 3 August 2012, skipper George Edwards claimed that a photo he took on 2 November 2011 shows "Nessie". Edwards claims to have searched for the monster for 26 years, and reportedly spent 60 hours per week on the loch aboard his boat, "Nessie Hunter IV", taking tourists for rides on the loch. Edwards said, "In my opinion, it probably looks kind of like a manatee, but not a mammal. When people see three humps, they're probably just seeing three separate monsters." Other researchers have questioned the photograph's authenticity, and Loch Ness researcher Steve Feltham suggested that the object in the water is a fibreglass hump used in a National Geographic Channel documentary in which Edwards had participated. Researcher Dick Raynor has questioned Edwards' claim of discovering a deeper bottom of Loch Ness, which Raynor calls "Edwards Deep". He found inconsistencies between Edwards' claims for the location and conditions of the photograph and the actual location and weather conditions that day. According to Raynor, Edwards told him he had faked a photograph in 1986 that he claimed was genuine in the Nat Geo documentary. Although Edwards admitted in October 2013 that his 2011 photograph was a hoax, he insisted that the 1986 photograph was genuine. A survey of the literature about other hoaxes, including photographs, published by "Scientific American" on 10 July 2013, indicates many others since the 1930s. The most recent photo considered to be "good" appeared in newspapers in August 2012; it was allegedly taken by George Edwards in November 2011 but was "definitely a hoax" according to the magazine. On 27 August 2013, tourist David Elder presented a five-minute video of a "mysterious wave" in the loch. According to Elder, the wave was produced by a "solid black object" just under the surface of the water. Elder, 50, from East Kilbride, South Lanarkshire, was taking a picture of a swan at the Fort Augustus pier on the south-western end of the loch, when he captured the movement. He said, "The water was very still at the time and there were no ripples coming off the wave and no other activity on the water." Sceptics suggested that the wave may have been caused by a wind gust. On 19 April 2014, it was reported that a satellite image on Apple Maps showed what appeared to be a large creature (thought by some to be the Loch Ness Monster) just below the surface of Loch Ness. At the loch's far north, the image appeared about long. Possible explanations were the wake of a boat (with the boat itself lost in image stitching or low contrast), seal-caused ripples, or floating wood. Google commemorated the 81st anniversary of the "surgeon's photograph" with a Google Doodle, and added a new feature to Google Street View with which users can explore the loch above and below the water. Google reportedly spent a week at Loch Ness collecting imagery with a street-view "trekker" camera, attaching it to a boat to photograph above the surface and collaborating with members of the Catlin Seaview Survey to photograph underwater. 
After reading Rupert Gould's "The Loch Ness Monster and Others", Edward Mountain financed a search. Twenty men with binoculars and cameras positioned themselves around the loch from 9 am to 6 pm for five weeks, beginning on 13 July 1934. Although 21 photographs were taken, none was considered conclusive. Supervisor James Fraser remained by the loch filming on 15 September 1934; the film is now lost. Zoologists and professors of natural history concluded that the film showed a seal, possibly a grey seal. The "Loch Ness Phenomena Investigation Bureau" (LNPIB) was a UK-based society formed in 1962 by Norman Collins, R. S. R. Fitter, politician David James, Peter Scott and Constance Whyte "to study Loch Ness to identify the creature known as the Loch Ness Monster or determine the causes of reports of it". The society's name was later shortened to the Loch Ness Investigation Bureau (LNIB), and it disbanded in 1972. The LNIB had an annual subscription charge, which covered administration. Its main activity was encouraging groups of self-funded volunteers to watch the loch from vantage points with film cameras with telescopic lenses. From 1965 to 1972 it had a caravan camp and viewing platform at Achnahannet, and sent observers to other locations up and down the loch. According to the bureau's 1969 annual report it had 1,030 members, of whom 588 were from the UK. D. Gordon Tucker, chair of the Department of Electronic and Electrical Engineering at the University of Birmingham, volunteered his services as a sonar developer and expert at Loch Ness in 1968. His gesture, part of a larger effort led by the LNPIB from 1967 to 1968, involved collaboration between volunteers and professionals in a number of fields. Tucker had chosen Loch Ness as the test site for a prototype sonar transducer with a maximum range of . The device was fixed underwater at Temple Pier in Urquhart Bay and directed at the opposite shore, drawing an acoustic "net" across the loch through which no moving object could pass undetected. During the two-week trial in August, multiple targets were identified. One was probably a shoal of fish, but others moved in a way not typical of shoals at speeds up to 10 knots. In 1972, a group of researchers from the Academy of Applied Science led by Robert H. Rines conducted a search for the monster involving sonar examination of the loch depths for unusual activity. Rines took precautions to avoid murky water with floating wood and peat. A submersible camera with a floodlight was deployed to record images below the surface. If Rines detected anything on the sonar, he turned the light on and took pictures. On 8 August, Rines' Raytheon DE-725C sonar unit, operating at a frequency of 200 kHz and anchored at a depth of , identified a moving target (or targets) estimated by echo strength at in length. Specialists from Raytheon, Simrad (now Kongsberg Maritime), Hydroacoustics, Marty Klein of MIT and Klein Associates (a side-scan sonar producer) and Ira Dyer of MIT's Department of Ocean Engineering were on hand to examine the data. P. Skitzki of Raytheon suggested that the data indicated a protuberance projecting from one of the echoes. According to author Roy Mackal, the shape was a "highly flexible laterally flattened tail" or the misinterpreted return from two animals swimming together. Concurrent with the sonar readings, the floodlit camera obtained a pair of underwater photographs. 
Both depicted what appeared to be a rhomboid flipper, although sceptics have dismissed the images as depicting the bottom of the loch, air bubbles, a rock, or a fish fin. The apparent flipper was photographed in different positions, indicating movement. The first flipper photo is better-known than the second, and both were enhanced and retouched from the original negatives. According to team member Charles Wyckoff, the photos were retouched to superimpose the flipper; the original enhancement showed a considerably less-distinct object. No one is sure how the originals were altered. During a meeting with Tony Harmsworth and Adrian Shine at the Loch Ness Centre & Exhibition, Rines admitted that the flipper photo may have been retouched by a magazine editor. British naturalist Peter Scott announced in 1975, on the basis of the photographs, that the creature's scientific name would be "Nessiteras rhombopteryx" (Greek for "Ness inhabitant with diamond-shaped fin"). Scott intended that the name would enable the creature to be added to the British register of protected wildlife. Scottish politician Nicholas Fairbairn called the name an anagram for "Monster hoax by Sir Peter S". However, Rines countered that when rearranged, the letters could also spell "Yes, both pix are monsters – R." Another sonar contact was made, this time with two objects estimated to be about . The strobe camera photographed two large objects surrounded by a flurry of bubbles. Some interpreted the objects as two plesiosaur-like animals, suggesting several large animals living in Loch Ness. This photograph has rarely been published. A second search was conducted by Rines in 1975. Some of the photographs, despite their obviously murky quality and lack of concurrent sonar readings, did indeed seem to show unknown animals in various positions and lightings. One photograph appeared to show the head, neck, and upper torso of a plesiosaur-like animal, but sceptics argue the object is a log due to the lump on its "chest" area, the mass of sediment in the full photo, and the object's log-like "skin" texture. Another photograph seemed to depict a horned "gargoyle head", consistent with that of some sightings of the monster; however, sceptics point out that a tree stump was later filmed during Operation Deepscan in 1987, which bore a striking resemblance to the gargoyle head. In 2001, Rines' Academy of Applied Science videotaped a V-shaped wake traversing still water on a calm day. The academy also videotaped an object on the floor of the loch resembling a carcass and found marine clamshells and a fungus-like organism not normally found in freshwater lochs, a suggested connection to the sea and a possible entry for the creature. In 2008, Rines theorised that the creature may have become extinct, citing the lack of significant sonar readings and a decline in eyewitness accounts. He undertook a final expedition, using sonar and an underwater camera in an attempt to find a carcass. Rines believed that the animals may have failed to adapt to temperature changes resulting from global warming. Operation Deepscan was conducted in 1987. Twenty-four boats equipped with echo sounding equipment were deployed across the width of the loch, and simultaneously sent acoustic waves. According to BBC News the scientists had made sonar contact with an unidentified object of unusual size and strength. The researchers returned, re-scanning the area. 
Analysis of the echosounder images seemed to indicate debris at the bottom of the loch, although there was motion in three of the pictures. Adrian Shine speculated, based on size, that they might be seals that had entered the loch. Sonar expert Darrell Lowrance, founder of Lowrance Electronics, donated a number of echosounder units used in the operation. After examining a sonar return indicating a large, moving object at a depth of near Urquhart Bay, Lowrance said: "There's something here that we don't understand, and there's something here that's larger than a fish, maybe some species that hasn't been detected before. I don't know." In 2003, the BBC sponsored a search of the loch using 600 sonar beams and satellite tracking. The search had sufficient resolution to identify a small buoy. No animal of substantial size was found and, despite their reported hopes, the scientists involved admitted that this "proved" the Loch Ness Monster was a myth. The documentary "Searching for the Loch Ness Monster" aired on BBC One. An international team of researchers from the universities of Otago, Copenhagen, Hull and the Highlands and Islands conducted a DNA survey of the loch in June 2018, looking for unusual species. The results were published in 2019: there was no DNA of large fish such as sharks, sturgeons or catfish, and no otter or seal DNA either, but a large amount of eel DNA was found. The leader of the study, Prof Neil Gemmell of the University of Otago, said he could not rule out the possibility of eels of extreme size, though none was found and none has ever been caught. The other possibility is that the large amount of eel DNA simply comes from many small eels. No evidence of any reptilian sequences was found: "so I think we can be fairly sure that there is probably not a giant scaly reptile swimming around in Loch Ness", he added. A number of explanations have been suggested to account for sightings of the creature. According to Ronald Binns, a former member of the Loch Ness Phenomena Investigation Bureau, there is probably no single explanation of the monster. Binns wrote two sceptical books, the 1983 "The Loch Ness Mystery Solved", and his 2017 "The Loch Ness Mystery Reloaded". In these he contends that an aspect of human psychology is the ability of the eye to see what it wants, and expects, to see. The proposed explanations may be categorised as misidentifications of known animals, misidentifications of inanimate objects or effects, reinterpretations of Scottish folklore, hoaxes, and exotic species of large animals. A reviewer wrote that Binns had "evolved into the author of ... the definitive, skeptical book on the subject". Binns does not call the sightings a hoax, but "a myth in the true sense of the term", and states that the "monster is a sociological ... phenomenon. ... After 1983 the search ... (for the) possibility that there just 'might' be continues to enthrall a small number for whom eye-witness evidence outweighs all other considerations". Wakes have been reported when the loch is calm, with no boats nearby. Bartender David Munro reported a wake he believed was a creature zigzagging, diving, and reappearing; there were reportedly 26 other witnesses from a nearby car park. Although some sightings describe a V-shaped wake similar to a boat's, others report something not conforming to the shape of a boat. A large eel was an early suggestion for what the "monster" was. Eels are found in Loch Ness, and an unusually large one would explain many sightings. 
Dinsdale dismissed the hypothesis because eels undulate side to side like snakes. Sightings in 1856 of a "sea-serpent" (or kelpie) in a freshwater lake near Leurbost in the Outer Hebrides were explained as those of an oversized eel, also believed common in "Highland lakes". From 2018 to 2019, scientists from New Zealand undertook a massive project to document every organism in Loch Ness based on DNA samples. Their reports confirmed that European eels are still found in the loch. No DNA samples were found for large animals such as catfish, Greenland sharks, or plesiosaurs. Many scientists now believe that giant eels account for many, if not most, of the sightings. In a 1979 article, California biologist Dennis Power and geographer Donald Johnson claimed that the "surgeon's photograph" was the top of the head, extended trunk and flared nostrils of a swimming elephant photographed elsewhere and claimed to be from Loch Ness. In 2006, palaeontologist and artist Neil Clark suggested that travelling circuses might have allowed elephants to bathe in the loch; the trunk could be the perceived head and neck, with the head and back the perceived humps. In support of this, Clark provided a painting. Zoologist, angler and television presenter Jeremy Wade investigated the creature in 2013 as part of the series "River Monsters", and concluded that it is a Greenland shark. The Greenland shark, which can reach up to 20 feet in length, inhabits the North Atlantic Ocean around Canada, Greenland, Iceland, Norway, and possibly Scotland. It is dark in colour, with a small dorsal fin. According to biologist Bruce Wright, the Greenland shark could survive in fresh water (possibly using rivers and lakes to find food) and Loch Ness has an abundance of salmon and other fish. In July 2015 three news outlets reported that Steve Feltham, after a vigil at the loch that was recognised by the Guinness Book of Records, theorised that the monster is an unusually large specimen of Wels catfish ("Silurus glanis"), which may have been released during the late 19th century. It is difficult to judge the size of an object in water through a telescope or binoculars with no external reference. Loch Ness has resident otters, and photographs of otters and of deer swimming in the loch, cited by author Ronald Binns, may have been misinterpreted. According to Binns, birds may be mistaken for a "head and neck" sighting. In 1933, the "Daily Mirror" published a picture with the caption: "This queerly-shaped tree-trunk, washed ashore at Foyers [on Loch Ness] may, it is thought, be responsible for the reported appearance of a 'Monster'." In a 1982 series of articles for "New Scientist", Maurice Burton proposed that sightings of Nessie and similar creatures may be of fermenting Scots pine logs rising to the surface of the loch. A decomposing log could not initially release gases caused by decay because of its high resin level. Gas pressure would eventually rupture a resin seal at one end of the log, propelling it through the water (sometimes to the surface). According to Burton, the shape of tree logs (with their branch stumps) closely resembles descriptions of the monster. Loch Ness, because of its long, straight shape, is subject to unusual ripples affecting its surface. A seiche is a large oscillation of a lake, caused by water reverting to its natural level after being blown to one end of the lake (resulting in a standing wave); the Loch Ness oscillation period is 31.5 minutes. 
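The 31.5-minute oscillation period quoted above can be sanity-checked with Merian's formula for the fundamental seiche period of an enclosed basin, T = 2L / sqrt(g * h). The short Python sketch below is illustrative only: the length and mean depth plugged in for Loch Ness are rough figures assumed for the example, not values given in this article.

import math

def seiche_period(length_m, mean_depth_m, g=9.81):
    # Merian's formula: fundamental seiche period of a long, narrow basin
    # of roughly uniform depth, T = 2 * L / sqrt(g * h).
    return 2.0 * length_m / math.sqrt(g * mean_depth_m)

# Assumed approximate dimensions for Loch Ness (not taken from the article).
LOCH_LENGTH_M = 36_000       # about 36 km long
LOCH_MEAN_DEPTH_M = 132      # about 132 m mean depth

period_minutes = seiche_period(LOCH_LENGTH_M, LOCH_MEAN_DEPTH_M) / 60
print(f"Fundamental seiche period: {period_minutes:.1f} minutes")

With these assumed figures the formula gives a period of roughly half an hour, the same order as the 31.5-minute value quoted for the loch, which is why a long, narrow lake like Loch Ness shows such pronounced surface oscillations.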
Wind conditions can give a choppy, matte appearance to the water, with calm patches appearing dark from the shore (reflecting the mountains). In 1979 W. H. Lehn showed that atmospheric refraction could distort the shape and size of objects and animals, and later published a photograph of a mirage of a rock on Lake Winnipeg that resembled a head and neck. Italian geologist Luigi Piccardi has proposed geological explanations for ancient legends and myths. Piccardi noted that in the earliest recorded sighting of a creature (the "Life of Saint Columba"), the creature's emergence was accompanied "cum ingenti fremitu" ("with loud roaring"). Loch Ness lies along the Great Glen Fault, and this could be a description of an earthquake. Many reports consist only of a large disturbance on the surface of the water; this could be a release of gas through the fault, although it may be mistaken for something swimming below the surface. In 1980 Swedish naturalist and author Bengt Sjögren wrote that present beliefs in lake monsters such as the Loch Ness Monster are associated with kelpie legends. According to Sjögren, accounts of loch monsters have changed over time; originally describing horse-like creatures, they were intended to keep children away from the loch. Sjögren wrote that the kelpie legends have developed into descriptions reflecting a modern awareness of plesiosaurs. The kelpie as a water horse in Loch Ness was mentioned in an 1879 Scottish newspaper, and inspired Tim Dinsdale's "Project Water Horse". A study of pre-1933 Highland folklore references to kelpies, water horses and water bulls indicated that Ness was the loch most frequently cited. A number of hoax attempts have been made, some of which were successful. Other hoaxes were revealed rather quickly by the perpetrators or exposed after diligent research. A few examples follow. In August 1933, Italian journalist Francesco Gasparini submitted what he said was the first news article on the Loch Ness Monster. In 1959, he admitted that he had worked up a brief report of a "strange fish" and fabricated the eyewitness accounts: "I had the inspiration to get hold of the item about the strange fish. The idea of the monster had never dawned on me, but then I noted that the strange fish would not yield a long article, and I decided to promote the imaginary being to the rank of monster without further ado." In the 1930s, big-game hunter Marmaduke Wetherell went to Loch Ness to look for the monster. Wetherell claimed to have found footprints, but when casts of the footprints were sent to scientists for analysis they turned out to be from a hippopotamus; a prankster had used a hippopotamus-foot umbrella stand. In 1972 a team of zoologists from Yorkshire's Flamingo Park Zoo, searching for the monster, discovered a large body floating in the water. The corpse, long and weighing as much as 1.5 tonnes, was described by the Press Association as having "a bear's head and a brown scaly body with clawlike fins." The creature was placed in a van to be carried away for testing, but police seized the cadaver under an act of parliament prohibiting the removal of "unidentified creatures" from Loch Ness. It was later revealed that Flamingo Park education officer John Shields shaved the whiskers and otherwise disfigured a bull elephant seal that had died the week before and dumped it in Loch Ness to dupe his colleagues. On 2 July 2003, Gerald McSorley discovered a fossil, supposedly from the creature, when he tripped and fell into the loch. 
After examination, it was clear that the fossil had been planted. In 2004 a Five TV documentary team, using cinematic special-effects experts, tried to convince people that there was something in the loch. They constructed an animatronic model of a plesiosaur, calling it "Lucy". Despite setbacks (including Lucy falling to the bottom of the loch), about 600 sightings were reported in the area where she was placed. In 2005, two students claimed to have found a large tooth embedded in the body of a deer on the loch shore. They publicised the find, setting up a website, but expert analysis soon revealed that the "tooth" was the antler of a muntjac. The "tooth" was a publicity stunt to promote a horror novel by Steve Alten, "The Loch". In 1933 it was suggested that the creature "bears a striking resemblance to the supposedly extinct plesiosaur", a long-necked aquatic reptile that became extinct during the Cretaceous–Paleogene extinction event. Although a popular explanation at the time, it has since attracted a number of counter-arguments. In response to these criticisms, Tim Dinsdale, Peter Scott and Roy Mackal postulated a trapped marine creature that evolved from a plesiosaur directly or by convergent evolution. Robert Rines explained that the "horns" in some sightings function as breathing tubes (or nostrils), allowing it to breathe without breaking the surface. R. T. Gould suggested a long-necked newt; Roy Mackal examined the possibility, giving it the highest score (88 percent) on his list of possible candidates. In 1968 F. W. (Ted) Holiday proposed that Nessie and other lake monsters, such as Morag, may be a large invertebrate such as a bristleworm; he cited the extinct "Tullimonstrum" as an example of the shape. According to Holiday, this explains the land sightings and the variable back shape; he likened it to the medieval description of dragons as "worms". Although this theory was considered by Mackal, he found it less convincing than eels, amphibians or plesiosaurs.
https://en.wikipedia.org/wiki?curid=17877
Laser science Laser science or laser physics is a branch of optics that describes the theory and practice of lasers. Laser science is principally concerned with quantum electronics, laser construction, optical cavity design, the physics of producing a population inversion in laser media, and the temporal evolution of the light field in the laser. It is also concerned with the physics of laser beam propagation, particularly the physics of Gaussian beams, with laser applications, and with associated fields such as nonlinear optics and quantum optics. Laser science predates the invention of the laser itself. Albert Einstein created the foundations for the laser and maser in 1917, via a paper in which he re-derived Max Planck’s law of radiation using a formalism based on probability coefficients (Einstein coefficients) for the absorption, spontaneous emission, and stimulated emission of electromagnetic radiation. The existence of stimulated emission was confirmed in 1928 by Rudolf W. Ladenburg. In 1939, Valentin A. Fabrikant made the earliest laser proposal. He specified the conditions required for light amplification using stimulated emission. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission; in 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, experimentally confirmed, two years later, by Brossel, Kastler, and Winter. The theoretical principles describing the operation of a microwave laser (a maser) were first described by Nikolay Basov and Alexander Prokhorov at the "All-Union Conference on Radio Spectroscopy" in May 1952. The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger in 1953. Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964 for their research in the field of stimulated emission. Arthur Ashkin, Gérard Mourou, and Donna Strickland were awarded the Nobel Prize in Physics in 2018 for groundbreaking inventions in the field of laser physics. The first working laser (a pulsed ruby laser) was demonstrated on May 16, 1960, by Theodore Maiman at the Hughes Research Laboratories.
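As a numerical illustration of the point about stimulated emission and population inversion, the sketch below evaluates the Boltzmann ratio of upper- to lower-level populations for an optical transition in thermal equilibrium; the 694 nm wavelength (that of Maiman's ruby laser) and the room-temperature value are assumptions chosen for the example, not figures from this article.

import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K

def boltzmann_ratio(wavelength_m, temperature_k):
    # N2/N1 for two non-degenerate levels separated by the photon energy h*c/lambda,
    # assuming thermal equilibrium (Boltzmann statistics).
    delta_e = H * C / wavelength_m
    return math.exp(-delta_e / (K_B * temperature_k))

ratio = boltzmann_ratio(694e-9, 300.0)   # 694 nm transition at 300 K
print(f"N2/N1 at 300 K: {ratio:.2e}")    # about 1e-30: the upper level is essentially empty

Because the upper level is essentially unpopulated in equilibrium, stimulated emission cannot outpace absorption, and a population inversion has to be created by pumping, such as the optical pumping proposed by Kastler.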
https://en.wikipedia.org/wiki?curid=17878
Lincoln, England Lincoln is a cathedral city and the county town of Lincolnshire in the East Midlands of England. The non-metropolitan district of Lincoln had a 2012 population of 94,600. The 2011 census gave the urban area of Lincoln, which includes North Hykeham and Waddington, a population of 130,200. Roman "Lindum Colonia" developed from an Iron Age settlement on the River Witham. The city's landmarks include Lincoln Cathedral, an example of English Gothic architecture which was the tallest building in the world for over 200 years, and the 11th-century Norman Lincoln Castle. The city is home to the University of Lincoln and Bishop Grosseteste University, and to Lincoln City FC and Lincoln United FC. The earliest origins of Lincoln can be traced to the remains of an Iron Age settlement of round wooden dwellings (which were discovered by archaeologists in 1972) that have been dated to the 1st century BC. This settlement was built by a deep pool (the modern Brayford Pool) in the River Witham at the foot of a large hill (on which the Normans later built Lincoln Cathedral and Lincoln Castle). The origins of the name Lincoln may come from this period, when the settlement is thought to have been named in the Brittonic language of Iron Age Britain's Celtic inhabitants as "Lindon", meaning "the pool", presumably referring to Brayford Pool (compare the etymology of the name Dublin, from the Gaelic "dubh linn", "black pool"). The extent of this original settlement is unknown as its remains are now buried deep beneath the later Roman and medieval ruins and modern Lincoln. The Romans conquered this part of Britain in AD 48 and shortly afterwards built a legionary fortress high on a hill overlooking the natural lake formed by the widening of the River Witham (the modern-day Brayford Pool) and at the northern end of the Fosse Way Roman road (A46). The Celtic name "Lindon" was subsequently Latinised to "Lindum" and given the title "Colonia" when it was converted into a settlement for army veterans. The conversion to a "colonia" was made when the legion moved on to York ("Eboracum") in AD 71. Lindum Colonia, or more fully Colonia Domitiana Lindensium after the Emperor Domitian who ruled at the time, was established within the walls of the hilltop fortress with the addition of an extension of about equal area, down the hillside to the waterside below. It became a major flourishing settlement, accessible from the sea both through the River Trent and through the River Witham. On the basis of the patently corrupt list of British bishops who attended the 314 Council of Arles, the city is now often considered to have been the capital of the province of Flavia Caesariensis, which was formed during the late-3rd-century Diocletian Reforms. Subsequently, the town and its waterways fell into decline. By the close of the 5th century it was largely deserted, although some occupation continued under a "Praefectus Civitatis" – Saint Paulinus visited a man holding this office in Lincoln in AD 629. Germanic tribes from the North Sea area settled Lincolnshire during the fifth and sixth centuries. The Latin "Lindum Colonia" was shortened in their language, Old English, first to Lindocolina, then to Lincylene. After the first Viking raids, the city again rose to some importance, with overseas trading connections. In Viking times Lincoln was a trading centre with its own mint, by far the most important in Lincolnshire, and by the end of the 10th century comparable in output to that of York. 
After the establishment of the Danelaw in 886, Lincoln became one of the Five Boroughs in the East Midlands. Excavations at Flaxengate reveal that the area, deserted since Roman times, received timber-framed buildings fronting a new street system in about 900. Lincoln underwent an economic explosion with the settlement of the Danes. Like York, the Upper City seems to have had purely administrative functions up to 850 or so, while the Lower City, down the hill towards the River Witham, may have been largely deserted. By 950, however, the Witham banks were developed, with the Lower City resettled and the suburb of Wigford emerging as a trading centre. In 1068, two years after the Norman conquest of England, William I ordered Lincoln Castle to be built on the site of the old Roman settlement, for the same strategic reasons and controlling the same road. Construction of the first Lincoln Cathedral, within its "close" or walled precinct facing the castle, began when the see was removed from the quiet backwater of Dorchester-on-Thames, Oxfordshire, and was completed in 1092; it was rebuilt after a fire but destroyed by an earthquake in 1185. The rebuilt Lincoln Minster, enlarged to the east at each rebuilding, was on a magnificent scale, its crossing tower crowned by a spire reputed to have been the highest in Europe at . When completed, the central of the three spires is widely accepted to have succeeded the Great Pyramids of Egypt as the tallest man-made structure in the world. The Lincoln bishops were among the magnates of medieval England. The Diocese of Lincoln, the largest in England, had more monasteries than the rest of England put together, and the diocese was supported by large estates. When Magna Carta was drawn up in 1215, one of the witnesses was Hugh of Wells, Bishop of Lincoln. One of only four surviving originals of the document is preserved in Lincoln Castle. Among the most famous bishops of Lincoln were Robert Bloet, the magnificent justiciar to Henry I, Hugh of Avalon, the cathedral builder canonised as St Hugh of Lincoln, Robert Grosseteste, the 13th century intellectual, Henry Beaufort, chancellor of Henry V and Henry VI, Thomas Rotherham, a politician deeply involved in the Wars of the Roses, Philip Repyngdon, chaplain to Henry IV and defender of Wycliffe, and Thomas Wolsey, the lord chancellor of Henry VIII. Theologian William de Montibus was the head of the cathedral school and chancellor until his death in 1213. The administrative centre was the Bishop's Palace, the third element in the central complex. When it was built in the late 12th century, the Bishop's Palace was one of the most important buildings in England. Built by Hugh of Lincoln, its East Hall over a vaulted undercroft is the earliest surviving example of a roofed domestic hall. The chapel range and entrance tower were built by Bishop William of Alnwick, who modernised the palace in the 1430s. Both Henry VIII and James I were guests there; the palace was sacked by royalist troops during the civil war in 1648. During the Anarchy, in 1141 Lincoln was the site of a battle between King Stephen and the forces of Empress Matilda, led by her illegitimate half-brother Robert, 1st Earl of Gloucester. After fierce fighting in the city's streets, Stephen's forces were defeated. Stephen himself was captured and taken to Bristol. By 1150, Lincoln was among the wealthiest towns in England. 
The basis of the economy was cloth and wool, exported to Flanders; Lincoln weavers had set up a guild in 1130 to produce Lincoln Cloth, especially the fine dyed "scarlet" and "green", whose reputation was later enhanced by the legendary Robin Hood wearing woollens of Lincoln green. In the Guildhall that surmounts the city gate called the Stonebow, the ancient Council Chamber contains Lincoln's civic insignia, a fine collection of civic regalia. Outside the precincts of cathedral and castle, the old quarter clustered around the Bailgate, and down Steep Hill to the High Bridge, whose half-timbered housing juts out over the river. There are three ancient churches: St Mary le Wigford and St Peter at Gowts, both 11th century in origin, and St Mary Magdalene, built in the late 13th century. The last is an unusual English dedication to the saint whose cult was coming into vogue on the European continent at that time. Lincoln was home to one of the five main Jewish communities in England, well established before it was officially noted in 1154. In 1190, anti-Semitic riots that started in King's Lynn, Norfolk, spread to Lincoln; the Jewish community took refuge with royal officials, but their habitations were plundered. The so-called House of Aaron has a two-storey street frontage that is essentially 12th century, and a nearby Jew's House likewise bears witness to the Jewish population. In 1255, in the affair called 'The Libel of Lincoln', prominent Jews of Lincoln, accused of the ritual murder of a Christian boy (the Little Saint Hugh of Lincoln of medieval folklore), were sent to the Tower of London and 18 were executed. The Jews were expelled from England altogether in 1290. Thirteenth-century Lincoln was England's third largest city and a favourite of more than one king. During the First Barons' War, it became caught up in the strife between the king and rebel barons who had allied with the French. It was here and at Dover that the French and rebel army was defeated. In the aftermath, the town was pillaged for having sided with Prince Louis. In the Second Barons' War of 1266, the disinherited rebels attacked the Jews of Lincoln, ransacked the synagogue, and burned the records which registered debts. According to some historians, the city's fortunes began to decline in the 14th century, although others have argued that the city remained buoyant in both trade and communications well into the 15th century. Thus in 1409 the city was made a county in its own right, known as the County of the City of Lincoln. Thereafter, additional rights were conferred on the town by successive monarchs, including those of an assay town (controlling metal manufacturing, for example). The oldest surviving secular drama in English, "The Interlude of the Student and the Girl" (c. 1300), may have originated from Lincoln. Lincoln's coat of arms, not officially endorsed by the College of Arms, is believed to date from the 14th century. It is "Argent on a cross gules a fleur-de-lis or". The cross is believed to derive from the Diocese of Lincoln. The fleur-de-lis is the symbol of the Virgin Mary, to whom the cathedral is dedicated. The motto is CIVITAS LINCOLNIA (Latin for City of Lincoln). The Dissolution of the Monasteries cut off Lincoln's main source of diocesan income and dried up the network of patronage controlled by the bishop. No fewer than seven monasteries closed within the city alone. A number of nearby abbeys were also closed, which further diminished the region's political power. 
A symbol of Lincoln's economic and political decline came in 1549, when the cathedral's great spire rotted and collapsed and was not replaced. However, the comparative poverty of post-medieval Lincoln preserved pre-medieval structures that would probably have been lost under more prosperous conditions. Between 1642 and 1651, during the English Civil War, Lincoln was on the frontier between the Royalist and Parliamentary forces and changed hands several times. Many buildings were badly damaged. Lincoln now had no major industry and no easy access to the sea. While the rest of the country was beginning to prosper in the early 18th century, Lincoln suffered, travellers often commenting on what had essentially become a one-street town. By the Georgian era, Lincoln's fortunes began to pick up, thanks in part to the Agricultural Revolution. The re-opening of the Foss Dyke canal allowed coal and other raw materials vital to industry to be brought into the city more easily. Along with the economic growth of Lincoln in this period, the city boundaries were expanded to include the West Common. To this day, an annual Beat the Boundaries walk takes place along the perimeter of the common. Coupled with the arrival of railway links, Lincoln boomed again during the Industrial Revolution, and several world-famous companies arose, such as Ruston's, Clayton's, Proctor's and William Foster's. Lincoln began to excel in heavy engineering, building locomotives, steam shovels and all manner of heavy machinery. A permanent military presence came with the completion of the "Old Barracks" (now occupied by the Museum of Lincolnshire Life) in 1857. These were replaced by the "New Barracks" (now Sobraon Barracks) in 1890, when Lincoln Drill Hall in Broadgate also opened. Lincoln was hit by a typhoid epidemic between November 1904 and August 1905 caused by polluted drinking water from Hartsholme Lake and the River Witham. Over 1,000 people contracted the disease and fatalities totalled 113, including the man responsible for the city's water supply, Liam Kirk of Baker Crescent. Near the beginning of the epidemic, Dr Alexander Cruickshank Houston installed a chlorine disinfection system just ahead of the poorly operating slow sand filter to kill the fatal bacteria. Chlorination of the water supply continued until 1911 when a new supply was implemented. The Lincoln chlorination episode was one of the first uses of the chemical to disinfect a water supply. Westgate Water Tower was constructed to provide new water supplies to the city. In the two world wars, Lincoln switched to war production. The first ever tanks were invented, designed and built in Lincoln by William Foster & Co. in the First World War and population growth provided more workers for even greater expansion. The tanks were tested on land now covered by Tritton Road in the south-west suburbs. During the Second World War, Lincoln produced a vast array of war goods: tanks, aircraft, munitions and military vehicles. Ruston & Hornsby produced diesel engines for ships and locomotives, then by teaming up with former colleagues of Frank Whittle and Power Jets Ltd, in the early 1950s, R & H (which became RGT) opened the first production line to build gas turbine engines for land-based and sea-based energy production. Its success made it the largest single employer in the city, providing over 5,000 jobs in its factory and research facilities, making it a rich takeover target for industrial conglomerates. 
It was subsumed by English Electric in November 1966, which was then bought by GEC in 1968, with diesel engine production being transferred to the Ruston Diesels Division in Newton-le-Willows, Lancashire, at the former Vulcan Foundry, which was eventually bought by the German MAN Diesel (now MAN Diesel & Turbo) in June 2000. Pelham Works merged with Alstom of France in the late 1980s, then was bought in 2003 by Siemens of Germany as Siemens Industrial Turbomachinery. This includes what is left of Napier Turbochargers. Plans came early in 2008 for a new plant outside the city at Teal Park, North Hykeham. However, Siemens made large-scale redundancies and moved jobs to both Sweden and the Netherlands. The factory now employs 1300 people. R & H's former Beevor Foundry is now owned by Hoval Group, which makes industrial boilers (wood chip). The Aerospace Manufacturing Facility (AMF) in Firth Road passed from Alstom Aerospace Ltd to ITP Engines UK in January 2009. Lincoln's second largest private employer is James Dawson and Son, a belting and hose manufacturer founded there in the late 19th century. Its two sites are both in Tritton Road. The main one, next to the University of Lincoln, used Lincoln's last coal-fired boiler, until it was replaced by a gas-powered one in July 2018. Dawson's became part of the Hull-based Fenner group in the late 1970s. New suburbs were built in the years after 1945, but heavy industry declined towards the end of the 20th century. Nevertheless, more people in Lincoln are still employed today in building gas turbines than in any other field. Much development, particularly around the Brayford area, has followed the construction of the University of Lincoln's Brayford Campus, which opened in 1996. In 2012, Bishop Grosseteste teaching college was also awarded university status. Lincoln's economy is based mainly on public administration, commerce, arable farming and tourism, with industrial relics like Ruston (now Siemens) still in existence. However, many of Lincoln's industrial giants have long ceased production there, leaving large empty industrial warehouse-like buildings. More recently, these have become multi-occupant units, with the likes of Lincs FM radio station (in the "Titanic Works") and LA Fitness gym taking up space. The main employment sectors in Lincoln are public administration, education and health, which account for 34 per cent of the workforce. Distribution, restaurants and hotels account for 25 per cent. Like many other cities, Lincoln has developed a growing IT economy, with many e-commerce mail order companies set up, along with a plethora of other, more conventional small industrial businesses. One reason behind the University of Lincoln was to increase inward investment and act as a springboard for small companies. Its presence has also drawn many more licensed premises to the town centre around the Brayford Pool. A small business unit next door to a university accommodation building, the Think Tank, opened in June 2009. The Extra motorway services company is based on "Castle Hill", with most new UK service areas being built by Swayfields, which is the parent company. There are two main electronics companies in the town: Chelmsford-based e2V (formerly Associated Electrical Industries before 1961) is situated between "Carholme Road" (A57) and the Foss Dyke next-door to Carholme Golf Club; and Dynex Semiconductor (formerly Marconi Electronic Devices) is in Doddington Road (B1190) near the A46 bypass and near North Hykeham. 
Bifrangi, an Italian company making crankshafts for off-road vehicles (tractors) using a screw press, is based at the former "Tower Works" owned by Smith-Clayton Forge Ltd. Lincoln is the hub of a wider area encompassing satellite settlements such as Welton, Saxilby, Skellingthorpe and Washingborough, which look to Lincoln for most service and employment needs. Adding them to the city's population raises it to 165,000. Lincoln is the main centre for jobs and facilities in Central Lincolnshire, and performs a regional role over much of Lincolnshire and parts of Nottinghamshire. According to a document entitled "Central Lincolnshire Local Plan Core Strategy", Lincoln is within a "travel-to-work" area with a population of about 300,000. Since 1994 Lincoln has gained two universities, in association with its growth in the services sector. New blocks of flats, restaurants and entertainment venues have appeared. Entertainment venues linked to the universities include The Engine Shed and The Venue Cinema. Around the Tritton Road (B1003) trading estate, new businesses have begun trading from large units with car parking. Lincoln has a choice of seven large national supermarkets (Tesco, Asda, Sainsbury, Waitrose, Morrisons, Aldi and Lidl). The St Mark's Square complex has Debenhams as its flagship store and an accompanying trading estate of well-known chain stores. The city is a tourist centre for those visiting historic buildings that include the cathedral, the castle and the medieval Bishop's Palace. The Collection, of which the Usher Gallery is now part, is an important attraction, partly housed in a purpose-built venue. It currently contains over 2,000,000 objects and was one of the four finalists for the 2006 Gulbenkian Prize. Any material from official archaeological excavations in Lincolnshire is eventually deposited in The Collection, so that it continues to grow. Other attractions include the Museum of Lincolnshire Life and the International Bomber Command Centre. Tranquil destinations close by are Whisby Nature Reserve and Hartsholme Country Park (including the Swanholme Lakes SSSI), while noisier entertainment can be found at Waddington airfield, Scampton airfield (base of the RAF's Red Arrows jet aerobatic team), the County Showground or the Cadwell Park motor racing circuit near Louth. Early in December the Bailgate area holds an annual Christmas Market in and around the Castle grounds, modelled on the traditional German-style Christmas markets found in cities such as Lincoln's twin town, Neustadt an der Weinstrasse. In 2010, for the first time in its history, the event was cancelled due to "atrocious" snowfalls across most of the United Kingdom. Lincoln lies north of London, at a height of above sea level by the River Witham up to on Castle Hill. It fills a gap in the Lincoln Cliff escarpment running north and south through central Lincolnshire and with altitudes up to . The city is 76 miles (123 km) north-east of Birmingham, 32 miles (51 km) north-east of Nottingham, 47 miles (76 km) north of Peterborough and 40 miles (64 km) east south-east of Sheffield. The city lies on the River Witham, which flows through this gap. Lincoln is thus divided informally into two zones, known unofficially as uphill and downhill. The uphill area comprises the northern part of the city, on top of the Lincoln Cliff (to the north of the gap). 
This area includes the historical quarter, including Lincoln Cathedral, Lincoln Castle and the Medieval Bishop's Palace, known locally as The Bail (although described in tourist promotional literature as the Cathedral Quarter). It includes residential suburbs to the north and north-east. The downhill area comprises the city centre and suburbs to the south and south-west. Steep Hill is a narrow, pedestrian street connecting the two (too steep for vehicular traffic). It passes through an archway known as the Stonebow. This divide, peculiar to Lincoln, was once an important class distinction, with uphill more affluent and downhill less so. The distinction dates from the time of the Norman conquest, when the religious and military elite occupied the hilltop. The expansion of suburbs in both parts of the city since the mid-19th century has diluted the distinction, but uphill housing continues to fetch a premium. The mute swan is an iconic species for Lincoln. Many pairs nest each year beside the Brayford, and they feature on the university's heraldic emblem. Other bird life within the city includes peregrine falcon, tawny owl and common kingfisher. Mammals on the city edges include red fox, roe deer and least weasel. European perch, northern pike and bream are among fish seen in the Witham and Brayford. Nature reserves around the city include Greetwell Hollow SSSI, Swanholme SSSI, Whisby Nature Park, Boultham Mere and Hartsholme Country Park. Since about 2016, Little egrets have nested in the Birchwood area and otters have been seen in the River Witham. Both species are native to Britain and repopulating the area after extermination. Several invasive species of plants and animals have reached Lincoln. Japanese knotweed and Himalayan balsam are Asian plant species around the River Witham. Galinsoga and Amsinckia are American species found among city weeds. American mink are occasionally seen on the Witham. Lincoln has a typical East Midland maritime climate with cool summers and mild winters. The nearest Met Office weather station is at RAF Waddington, about to the south. Temperature extremes since 1948 have ranged between on 25 July 2019, and in February 1956. A former weather station holds the record for the lowest daytime maximum temperature recorded in England in the month of December: on 17 December 1981. The coldest recent temperature was in December 2010, although another weather station, at Scampton, a similar distance north of the city centre, fell to , so equalling Waddington's record low set in 1956. Closure of Lincoln St Marks in 1985 left the city with Lincoln Central, soon renamed plain Lincoln, as its own station. Destinations from its five platforms include Newark-on-Trent, Sheffield, Leeds, Wakefield, Nottingham, Grimsby and Peterborough. London North Eastern Railway runs services to London King's Cross, calling at Newark, Peterborough and Stevenage. The £19-million A46 north-west bypass opened in December 1985, with an eastern A15 bypass scheduled to commence construction in 2017, but the collapse of the contractor, Carillion, means it is now scheduled to open in May 2020. The final, southern part of the Lincoln ring road is now under construction (2020). B1190 is an east–west road through Lincoln, between the Nottinghamshire-Lincolnshire boundary on the (Roman) Foss Dyke and A57 and in the east at Thimbleby on the A158 near Horncastle. Until the 1980s, the only two main roads through Lincoln were the A46 and A15, both feeding traffic along the High Street. 
At the intersection of Guildhall Street and the High Street, these met the termination of the A57. North of the city centre, the former A15, Riseholme Road, is the B1226, and the old A46, Nettleham Road, the B1182. The early northern inner ring-road, formed of Yarborough Road and Yarborough Crescent, is today numbered B1273. East Midlands Airport, 43 miles from Lincoln, is the main international airport serving the county. It mainly handles European flights with low-cost airlines. Humberside Airport, 29 miles north of Lincoln, is the only airport located within the county. It has a small number of flights, mainly to hub airports such as Amsterdam. Doncaster Sheffield Airport also serves Lincoln. It mainly caters to low-cost airlines and lies just outside the East Midlands Region in South Yorkshire. The older of Lincoln's two higher education institutions, Bishop Grosseteste University, was started as a teacher training college linked to the Anglican Church in 1862. During the 1990s, it branched out into other subject areas with a focus on the arts and drama. It became a university college in 2006, with degree-awarding powers taken over from the University of Leicester. The college became a university in 2012. An annual graduation celebration takes place in Lincoln Cathedral. Bishop Grosseteste University is not affiliated with the University of Lincoln. The larger University of Lincoln started as the University of Lincolnshire and Humberside in 1996, when the University of Humberside opened a Lincoln campus next to Brayford Pool. Lincoln School of Art and Design (which was Lincolnshire's main outlet for higher education) and Riseholme Agricultural College, which had previously been part of De Montfort University in Leicester, were absorbed into the University of Lincoln in 2001, and subsequently the Lincoln campus took priority over the Hull campus. Most buildings were built after 2001. The name changed to the University of Lincoln in September 2002. In the 2005–2006 academic year, 8,292 full-time undergraduates were studying there. This later rose to 11,900. Further education courses in Lincoln are provided by Lincoln College, which is the largest education institution in Lincolnshire, with 18,500 students, of whom 2,300 are full-time. There is a specialist creative college, Access Creative, offering courses in music, media and games design to some 180 students, all full-time. The school system in Lincoln is anomalous within Lincolnshire: despite being part of the same local education authority (LEA), the city has comprehensive schools, while most of the county retained the grammar-school system. Other areas near Lincoln, such as North Hykeham (home to North Kesteven School), Branston and Cherry Willingham, also have comprehensive schools. In 1952, William Farr School was founded in Welton, a nearby village. Lincoln itself had four single-sex grammar schools until September 1974. The Priory Academy LSST converted to academy status in 2008, in turn establishing The Priory Federation of Academies. The Priory Witham Academy was formed when the federation absorbed Moorlands Infant School, Usher Junior School and Ancaster High School. The Priory City of Lincoln Academy was formed when the City of Lincoln Community College merged into the federation. Both schools were rebuilt after substantial investment by the federation. Cherry Willingham School joined the federation in 2017, becoming The Priory Pembroke Academy. 
The Lincolnshire LEA was ranked 32nd in the country based on its proportion of pupils attaining at least five A–C grades at GCSE including maths and English (62.2% compared with a national average of 58.2%). There are four special-needs schools in Lincoln: Fortuna Primary School (5–11 years old), Sincil Sports College (11–16), St Christopher's School (3–16) and St Francis Community Special School (2–18). The local newspaper, the "Lincolnshire Echo", was founded in 1894. Local radio stations are BBC Lincolnshire on 94.9 FM, its commercial rival Lincs FM on 102.2 FM, and Lincoln City Radio on 103.6 FM, a community radio station catering mainly for listeners over 50. "The Lincolnite" is an online mobile publication covering the greater-Lincoln area. Local listeners can also receive Siren FM, on 107.3 FM from the University of Lincoln. The student publication "The Linc" is available online and in print and targets the University of Lincoln's student population. BBC Look North has a bureau in Lincoln as part of its coverage of Lincolnshire and East Yorkshire. The three TV reporters based in Lincoln serve both BBC Look North and East Midlands Today. ITV News also maintains a newsroom in Lincoln. Lincoln has a professional football team, Lincoln City FC, nicknamed "The Imps", which plays at the Sincil Bank stadium on the southern edge of the city. The collapse in 2002 of ITV Digital, which owed Lincoln City FC more than £100,000, left the club facing bankruptcy, but it was saved by a fund-raising venture among the fans, which returned ownership of the club to them, where it has remained since. The club was famously the first team to be automatically relegated from the English Football League, when automatic relegation to the Football Conference was introduced from the 1986–87 season. Lincoln City regained its league place at the first attempt and held onto it until the 2010–11 season, when it was again relegated to the Football Conference. Its most successful era was in the early 1980s, winning promotion from the Fourth Division in 1981 and narrowly missing promotion to the Second Division in the two years that followed. More recently, the club reached the quarter-finals of the FA Cup in 2017, beating several teams in the top two tiers of English football before being defeated by Arsenal. Lincoln City was the first club managed by Graham Taylor, who went on to manage the English national football team from 1990 to 1993. He was at Lincoln City from 1972 to 1977, during which time the club won promotion from the Fourth Division as champions in 1976. The club also won the Football League Division Three North title on three separate occasions, a joint record. Lincoln is also home to Lincoln United FC, Lincoln Moorlands Railway FC and Lincoln Griffins Ladies FC. Lincoln also hosts up-and-coming sports teams and facilities, such as American football's Lincolnshire Bombers, who play in the BAFA National Leagues, the Lincolnshire Bombers Roller Girls, the Imposters Rollergirls, and the Lincoln Rowing Centre on the River Witham. Lindum Hockey Club plays in the north of the city. Since 1956 the city has played host to the Lincoln Grand Prix one-day cycle race, which for around 30 years has used a city-centre finishing circuit incorporating the challenging 1-in-6 cobbled ascent of Michaelgate. Since 2013 the city has also boasted its own professional wrestling promotion and training academy, Lincoln Fight Factory Wrestling. The Lincoln Lions rugby union team has been playing since 1902. 
Two short-lived greyhound racing tracks were opened by the Lincolnshire Greyhound Racing Association. The first was the Highfield track in Hykeham Road, which opened on 13 September 1931, and the second was at the Lincoln Speedway on the Rope Walk, which opened on 4 June 1932. Racing at both tracks was independent, as they were "flapping" tracks not affiliated to the sport's governing body, the National Greyhound Racing Club. Their dates of closure are not known.
Luftwaffe The Luftwaffe was the aerial warfare branch of the "Wehrmacht" during World War II. Germany's military air arms during World War I, the "Luftstreitkräfte" of the Imperial Army and the "Marine-Fliegerabteilung" of the Imperial Navy, had been disbanded in May 1920 as a result of the terms of the Treaty of Versailles, which stated that Germany was forbidden to have any air force. During the interwar period, German pilots were trained secretly, in violation of the treaty, at Lipetsk Air Base in the Soviet Union. With the rise of the Nazi Party and the repudiation of the Versailles Treaty, the "Luftwaffe"s existence was publicly acknowledged on 26 February 1935, just over two weeks before open defiance of the Versailles Treaty through German re-armament and conscription would be announced on 16 March. The Condor Legion, a "Luftwaffe" detachment sent to aid Nationalist forces in the Spanish Civil War, provided the force with a valuable testing ground for new tactics and aircraft. Partially as a result of this combat experience, the "Luftwaffe" had become one of the most sophisticated, technologically advanced, and battle-experienced air forces in the world when World War II broke out in 1939. By the summer of 1939, the "Luftwaffe" had twenty-eight "Geschwader" (wings). The "Luftwaffe" also operated "Fallschirmjäger" paratrooper units. The "Luftwaffe" proved instrumental in the German victories across Poland and Western Europe in 1939 and 1940. During the Battle of Britain, however, despite inflicting severe damage on the RAF's infrastructure and, during the subsequent Blitz, devastating many British cities, the German air force failed to batter the beleaguered British into submission. From 1942, Allied bombing campaigns gradually destroyed the "Luftwaffe"s fighter arm. From late 1942, the "Luftwaffe" used its surplus ground support and other personnel to raise "Luftwaffe" Field Divisions. In addition to its service in the West, the "Luftwaffe" operated over the Soviet Union, North Africa and Southern Europe. Despite its belated use of advanced turbojet and rocket-propelled aircraft for the destruction of Allied bombers, the "Luftwaffe" was overwhelmed by the Allies' superior numbers and improved tactics, and hampered by a lack of trained pilots and aviation fuel. In January 1945, during the closing stages of the Battle of the Bulge, the "Luftwaffe" made a last-ditch effort to win air superiority, but met with failure. With rapidly dwindling supplies of petroleum, oil, and lubricants after this campaign, the "Luftwaffe", like the rest of the combined "Wehrmacht" forces, ceased to be an effective fighting force. After the defeat of Germany, the "Luftwaffe" was disbanded in 1946. During World War II, German pilots claimed roughly 70,000 aerial victories, while over 75,000 "Luftwaffe" aircraft were destroyed or significantly damaged. Of these, nearly 40,000 were lost entirely. The "Luftwaffe" had only two commanders-in-chief throughout its history: Hermann Göring and, for the last two weeks of the war, "Generalfeldmarschall" Robert Ritter von Greim. The "Luftwaffe" was deeply involved in Nazi war crimes. By the end of the war, a significant percentage of aircraft production originated in concentration camps, an industry employing tens of thousands of prisoners. The "Luftwaffe"s demand for labor was one of the factors that led to the deportation and murder of hundreds of thousands of Hungarian Jews in 1944. 
The "Oberkommando der Luftwaffe" organized Nazi human experimentation, and "Luftwaffe" ground troops committed massacres in Italy, Greece, and Poland. The Imperial German Army Air Service was founded in 1910 with the name "Die Fliegertruppen des deutschen Kaiserreiches", most often shortened to "Fliegertruppe". It was renamed the "Luftstreitkräfte" on 8 October 1916. The air war on the Western Front received the most attention in the annals of the earliest accounts of military aviation, since it produced aces such as Manfred von Richthofen and Ernst Udet, Oswald Boelcke, and Max Immelmann. After the defeat of Germany, the service was dissolved on 8 May 1920 under the conditions of the Treaty of Versailles, which also mandated the destruction of all German military aircraft. Since the Treaty of Versailles forbade Germany to have an air force, German pilots trained in secret. Initially, civil aviation schools within Germany were used, yet only light trainers could be used in order to maintain the façade that the trainees were going to fly with civil airlines such as Deutsche Luft Hansa. To train its pilots on the latest combat aircraft, Germany solicited the help of the Soviet Union, which was also isolated in Europe. A secret training airfield was established at Lipetsk in 1924 and operated for approximately nine years using mostly Dutch and Soviet, but also some German, training aircraft before being closed in 1933. This base was officially known as 4th squadron of the 40th wing of the Red Army. Hundreds of "Luftwaffe" pilots and technical personnel visited, studied and were trained at Soviet air force schools in several locations in Central Russia. Roessing, Blume, Fosse, Teetsemann, Heini, Makratzki, Blumendaat, and many other future "Luftwaffe" aces were trained in Russia in joint Russian-German schools that were set up under the patronage of Ernst August Köstring. The first steps towards the "Luftwaffe"s formation were undertaken just months after Adolf Hitler came to power. Hermann Göring, a World War I ace, became National "Kommissar" for aviation with former Luft Hansa director Erhard Milch as his deputy. In April 1933 the Reich Aviation Ministry ("Reichsluftfahrtministerium" or RLM) was established. The RLM was in charge of development and production of aircraft. Göring's control over all aspects of aviation became absolute. On 25 March 1933 the German Air Sports Association absorbed all private and national organizations, while retaining its 'sports' title. On 15 May 1933, all military aviation organizations in the RLM were merged, forming the "Luftwaffe"; its official 'birthday'. The National Socialist Flyers Corps ("Nationalsozialistisches Fliegerkorps" or NSFK) was formed in 1937 to give pre-military flying training to male youths, and to engage adult sport aviators in the Nazi movement. Military-age members of the NSFK were drafted into the "Luftwaffe". As all such prior NSFK members were also Nazi Party members, this gave the new "Luftwaffe" a strong Nazi ideological base in contrast to the other branches of the "Wehrmacht" (the "Heer" (Army) and the "Kriegsmarine" (Navy)). Göring played a leading role in the buildup of the "Luftwaffe" in 1933–36, but had little further involvement in the development of the force after 1936, and Milch became the "de facto" minister until 1937. The absence of Göring in planning and production matters was fortunate. Göring had little knowledge of current aviation, had last flown in 1922, and had not kept himself informed of latest events. 
Göring also displayed a lack of understanding of doctrine and technical issues in aerial warfare, which he left to others more competent. The Commander-in-Chief left the organisation and building of the "Luftwaffe", after 1936, to Erhard Milch. However, Göring, as part of Hitler's inner circle, provided access to financial resources and materiel for rearming and equipping the "Luftwaffe". Another prominent figure in the construction of German air power at this time was Helmuth Wilberg. Wilberg later played a large role in the development of German air doctrine. Having headed the "Reichswehr" air staff for eight years in the 1920s, Wilberg had considerable experience and was ideal for a senior staff position. Göring considered making Wilberg Chief of Staff (CS). However, it was revealed that Wilberg had a Jewish mother, so Göring could not have him as CS. Not wishing his talent to go to waste, Göring ensured the racial laws of the Third Reich did not apply to him. Wilberg remained in the air staff, and under Walther Wever helped draw up the "Luftwaffe"s principal doctrinal texts, "The Conduct of the Aerial War" and "Regulation 16". The German officer corps was keen to develop strategic bombing capabilities against its enemies. However, economic and geopolitical considerations had to take priority. The German air power theorists continued to develop strategic theories, but emphasis was given to army support, as Germany was a continental power and expected to face ground operations following any declaration of hostilities. For these reasons, between 1933 and 1934, the "Luftwaffe"s leadership was primarily concerned with tactical and operational methods. In aerial terms, the army concept of "Truppenführung" was an operational concept, as well as a tactical doctrine. In World War I, the "Fliegertruppe's" initial, 1914–15 era "Feldflieger Abteilung" observation/reconnaissance air units, each with six two-seater aircraft, had been attached to specific army formations and acted as support. Dive bomber units were considered essential to "Truppenführung", attacking enemy headquarters and lines of communications. "Luftwaffe" "Regulation 10: The Bomber" ("Dienstvorschrift 10: Das Kampfflugzeug"), published in 1934, advocated air superiority and approaches to ground attack tactics without dealing with operational matters. Until 1935, the 1926 manual "Directives for the Conduct of the Operational Air War" continued to act as the main guide for German air operations. The manual directed OKL to focus on limited operations (not strategic operations): the protection of specific areas and support of the army in combat. With an effective tactical-operational concept, the German air power theorists needed a strategic doctrine and organisation. Robert Knauss, a serviceman (not a pilot) in the "Luftstreitkräfte" during World War I, and later an experienced pilot with Lufthansa, was a prominent theorist of air power. Knauss promoted the Giulio Douhet theory that air power could win wars alone by destroying enemy industry and breaking enemy morale by "terrorizing the population" of major cities. This amounted to advocating attacks on civilians. The General Staff blocked the entry of Douhet's theory into doctrine, fearing revenge strikes against German civilians and cities. In December 1934, Chief of the "Luftwaffe" General Staff Walther Wever sought to mould the "Luftwaffe"s battle doctrine into a strategic plan. 
At this time, Wever conducted war games (simulated against France) in a bid to establish his theory of a strategic bombing force that would, he thought, prove decisive by winning the war through the destruction of enemy industry, even though these exercises also included tactical strikes against enemy ground forces and communications. In 1935, ""Luftwaffe" Regulation 16: The Conduct of the Air War" was drawn up. The document concluded, "The mission of the "Luftwaffe" is to serve these goals." Corum states that under this doctrine, the "Luftwaffe" leadership rejected the practice of "terror bombing" (see "Luftwaffe" strategic bombing doctrine). According to Corum, terror bombing was deemed to be "counter-productive", increasing rather than destroying the enemy's will to resist. Such bombing campaigns were regarded as a diversion from the "Luftwaffe"s main operations: the destruction of the enemy armed forces. Nevertheless, Wever recognised the importance of strategic bombing. In the newly introduced doctrine of 1935, "The Conduct of the Aerial War", Wever rejected the theory of Douhet and outlined five key points of air strategy. Wever began planning for a strategic bomber force and sought to incorporate strategic bombing into a war strategy. He believed that tactical aircraft should only be used as a step to developing a strategic air force. In May 1934, Wever initiated a seven-year project to develop the so-called "Ural bomber", which could strike deep into the heart of the Soviet Union. In 1935, this design competition led to the Dornier Do 19 and Junkers Ju 89 prototypes, although both were underpowered. In April 1936, Wever issued requirements for the 'Bomber A' design competition: a range of 6,700 km (4,163 mi) with a 900 kg (1,984 lb) bomb load. However, Wever's vision of a "Ural" bomber was never realised, and his emphasis on strategic aerial operations was lost. The only design submission for Wever's 'Bomber A' that reached production was Heinkel's "Projekt 1041", which culminated in production and front-line service as Germany's only operational heavy bomber, the Heinkel He 177; it received its RLM airframe number on 5 November 1937. In 1935, the military functions of the RLM were grouped into the "Oberkommando der Luftwaffe" (OKL; "Air Force High Command"). Following the untimely death of Walther Wever in early June 1936 in an aviation-related accident, by the late 1930s the "Luftwaffe" had no clear purpose. The air force was not subordinated to the army support role, and it was not given any particular strategic mission. German doctrine fell between the two concepts. The "Luftwaffe" was to be an organization capable of carrying out broad and general support tasks rather than any specific mission. Mainly, this path was chosen to encourage a more flexible use of air power and offer the ground forces the right conditions for a decisive victory. In fact, on the outbreak of war, only 15% of the "Luftwaffe"s aircraft were devoted to ground support operations, counter to the long-held myth that the "Luftwaffe" was designed for only tactical and operational missions. Wever's participation in the construction of the "Luftwaffe" came to an abrupt end on 3 June 1936, when he was killed along with his engineer in a Heinkel He 70 Blitz, ironically on the very day that his "Bomber A" heavy bomber design competition was announced. After Wever's death Göring began taking more of an interest in the appointment of "Luftwaffe" staff officers. 
Göring appointed Wever's successor, Albert Kesselring, as Chief of Staff and Ernst Udet to head the Reich's Air Ministry Technical Office ("Technisches Amt"), although Udet was not a technical expert. Despite this, Udet helped change the "Luftwaffe"s tactical direction towards fast medium bombers intended to destroy enemy air power in the battle zone rather than through industrial bombing of its aviation production. Kesselring and Udet did not get on. During Kesselring's time as CS, 1936–1937, a power struggle developed between the two as Udet attempted to extend his own power within the "Luftwaffe". Kesselring also had to contend with Göring appointing "yes men" to positions of importance. Udet realised his limitations, but his failures in the production and development of German aircraft would have serious long-term consequences. The failure of the "Luftwaffe" to progress further towards attaining a strategic bombing force was attributable to several reasons. Many in the "Luftwaffe" command believed medium bombers to be of sufficient power to launch strategic bombing operations against Germany's most likely enemies: France, Czechoslovakia, and Poland. The United Kingdom presented greater problems. "General der Flieger" Hellmuth Felmy, commander of "Luftflotte 2" in 1939, was charged with devising a plan for an air war over the British Isles. Felmy was convinced that Britain could be defeated through morale bombing. Felmy noted the alleged panic that had broken out in London during the Munich crisis, evidence he believed of British weakness. A second reason was technical. German designers had never solved the Heinkel He 177A's design difficulties, brought on by the requirement, in place from its inception on 5 November 1937, for moderate dive-bombing capability in a 30-meter-wingspan aircraft. Moreover, Germany did not possess the economic resources to match the later British and American effort of 1943–1944, particularly in large-scale mass production of high-power aircraft engines (with outputs of at least 1,500 kW (2,000 hp)). In addition, OKL had not foreseen the industrial and military effort strategic bombing would require. By 1939 the "Luftwaffe" was not much better prepared than its enemies to conduct a strategic bombing campaign, with fatal results during the Battle of Britain. The German rearmament program faced difficulties acquiring raw materials. Germany imported most of its essential materials for rebuilding the "Luftwaffe", in particular rubber and aluminium. Petroleum imports were particularly vulnerable to blockade. Germany pushed for synthetic fuel plants, but still failed to meet demands. In 1937 Germany imported more fuel than it had at the start of the decade. By the summer of 1938 only 25% of requirements could be covered. In steel, industry was operating at barely 83% of capacity, and by November 1938 Göring reported that the economic situation was serious. The "Oberkommando der Wehrmacht" (OKW), the overall command for all German military forces, ordered reductions in raw materials and steel used for armament production. The figures for reduction were substantial: 30% steel, 20% copper, 47% aluminium, and 14% rubber. Under such circumstances, it was not possible for Milch, Udet, or Kesselring to produce a formidable strategic bombing force even had they wanted to do so. The development of aircraft was now confined to the production of twin-engined medium bombers that required much less material, manpower and aviation production capacity than Wever's "Ural Bomber". 
German industry could build two medium bombers for one heavy bomber, and the RLM would not gamble on developing a heavy bomber, which would also take time. Göring remarked, "the "Führer" will not ask how big the bombers are, but only how many there are." The premature death of Wever, one of the "Luftwaffe"s finest officers, left the "Luftwaffe" without a strategic air force during World War II, which eventually proved fatal to the German war effort. The lack of strategic capability should have been apparent much earlier. The Sudeten Crisis highlighted German unpreparedness to conduct a strategic air war (although the British and French were in a much weaker position), and Hitler ordered that the "Luftwaffe" be expanded to five times its earlier size. OKL badly neglected the need for transport aircraft; even in 1943, transport units were still described as "Kampfgeschwadern zur besonderen Verwendung" (Bomber Units on Special Duties, KGzbV), and they were only grouped together into dedicated cargo and personnel transport wings ("Transportgeschwader") during that year. In March 1938, as the "Anschluss" was taking place, Göring ordered Felmy to investigate the prospect of air raids against Britain. Felmy concluded it was not possible until bases in Belgium and the Netherlands were obtained and the "Luftwaffe" had heavy bombers. It mattered little, as war was avoided by the Munich Agreement, and the need for long-range aircraft did not arise. These failures were not exposed until wartime. In the meantime, German designs of mid-1930s origin such as the Messerschmitt Bf 109, Heinkel He 111, Junkers Ju 87 Stuka, and Dornier Do 17 performed very well. All first saw active service in the Condor Legion against Soviet-supplied aircraft. The "Luftwaffe" also quickly realized the days of the biplane fighter were finished, the Heinkel He 51 being switched to service as a trainer. Particularly impressive were the Heinkel and Dornier, which fulfilled the "Luftwaffe"s requirements for bombers that were faster than 1930s-era fighters, many of which were biplanes or strut-braced monoplanes. Despite the participation of these aircraft (mainly from 1938 onward), it was the venerable Junkers Ju 52 (which soon became the backbone of the "Transportgruppen") that made the main contribution. During the Spanish Civil War Hitler remarked, "Franco ought to erect a monument to the glory of the Junkers Ju 52. It is the aircraft which the Spanish revolution has to thank for its victory." Poor accuracy from level bombers in 1937 led the "Luftwaffe" to grasp the benefits of dive-bombing. The latter could achieve far better accuracy against tactical ground targets than heavier conventional bombers. Range was not a key criterion for this mission. It was not always feasible for the army to move heavy artillery over recently captured territory to bombard fortifications or support ground forces, and dive bombers could do the job faster. Dive bombers, often single-engine two-man machines, could achieve better results than larger six or seven-man aircraft, at a tenth of the cost and with four times the accuracy. This led to Udet championing the dive bomber, particularly the Junkers Ju 87. Udet's "love affair" with dive bombing seriously affected the long-term development of the "Luftwaffe", especially after General Wever's death. The tactical strike aircraft programs were meant to serve as interim solutions until the next generation of aircraft arrived. In 1936 the Junkers Ju 52 was the backbone of the German bomber fleet. 
This led to a rush on the part of the RLM to produce the Junkers Ju 86, Heinkel He 111, and Dornier Do 17 before a proper evaluation was made. The Ju 86 was poor, while the He 111 showed the most promise. The Spanish Civil War, along with the limited output of the German munitions industry, convinced Udet that wastage of munitions was not acceptable. Udet sought to build dive-bombing capability into the Junkers Ju 88, and the same idea, initiated specifically by OKL, was applied to the Heinkel He 177, approved in early November 1937. In the case of the Ju 88, 50,000 modifications had to be made. The weight was increased from seven to twelve tons. This resulted in a speed loss of 200 km/h. Udet merely conveyed OKL's own dive-bombing requirement for the He 177 to Ernst Heinkel, who vehemently opposed such an idea, which ruined the type's development as a heavy bomber. Göring was not able to rescind the dive bombing requirement for the He 177A until September 1942. By the summer of 1939, the "Luftwaffe" had ready for combat nine "Jagdgeschwader" (fighter wings) mostly equipped with the Messerschmitt Bf 109E, four "Zerstörergeschwader" (destroyer wings) equipped with the Messerschmitt Bf 110 heavy fighter, 11 "Kampfgeschwader" (bomber wings) equipped mainly with the Heinkel He 111 and the Dornier Do 17Z, and four "Sturzkampfgeschwader" (dive bomber wings) primarily armed with the iconic Junkers Ju 87B "Stuka". The "Luftwaffe" was just starting to accept the Junkers Ju 88A for service, as it had encountered design difficulties, with only a dozen aircraft of the type considered combat-ready. The "Luftwaffe"s strength at this time stood at 373,000 personnel (208,000 flying troops, 107,000 in the Flak Corps and 58,000 in the Signals Corps). Aircraft strength was 4,201 operational aircraft: 1,191 bombers, 361 dive bombers, 788 fighters, 431 heavy fighters, and 488 transports. Despite deficiencies it was an impressive force. However, even by the spring of 1940, the "Luftwaffe" still had not mobilized fully. Despite the shortage of raw materials, "Generalluftzeugmeister" Ernst Udet had increased production through introducing a 10-hour working day for aviation industries and rationalizing production. During this period 30 "Kampfstaffeln" and 16 "Jagdstaffeln" were raised and equipped. A further five "Zerstörergruppen" ("destroyer groups") were created (JGr 101, 102, 126, 152 and 176), all equipped with the Bf 110. The "Luftwaffe" also greatly expanded its aircrew training programs by 42%, to 63 flying schools. These facilities were moved to eastern Germany, away from possible Allied threats. The number of aircrew reached 4,727, an increase of 31%. However, the rush to complete this rapid expansion scheme resulted in the deaths of 997 personnel and another 700 wounded. Some 946 aircraft were also destroyed in these accidents. The number of aircrew completing their training rose to 3,941. The "Luftwaffe"s entire strength was now 2.2 million personnel. In April and May 1941, Udet headed the "Luftwaffe" delegation inspecting the Soviet aviation industry in compliance with the Molotov–Ribbentrop Pact. Udet informed Göring that the "Soviet air forces are very strong and technically advanced." Göring decided not to report the facts to Hitler, hoping that a surprise attack would quickly destroy the USSR. Udet realized that the upcoming war against Russia might cripple Germany. 
Udet, torn between truth and loyalty, suffered a psychological breakdown and even tried to tell Hitler the truth, but Göring told Hitler that Udet was lying and then kept Udet under his control by plying him with drugs at drinking parties and hunting trips. Udet's drinking and psychological condition became a problem, but Göring used Udet's dependency to manipulate him. Throughout the history of Nazi Germany, the "Luftwaffe" had only two commanders-in-chief. The first was Hermann Göring, with the second and last being "Generalfeldmarschall" Robert Ritter von Greim. Greim's appointment as commander-in-chief of the "Luftwaffe" was concomitant with his promotion to "Generalfeldmarschall"; he was the last German officer in World War II to be promoted to the highest rank. Other officers promoted to the second highest military rank in Germany were Albert Kesselring, Hugo Sperrle, Erhard Milch, and Wolfram von Richthofen. At the end of the war, with Berlin surrounded by the Red Army, Göring suggested to Hitler that he take over leadership of the Reich. Hitler ordered his arrest and execution, but Göring's SS guards did not carry out the order, and Göring survived to be tried at Nuremberg. Sperrle was prosecuted at the OKW Trial, one of the last twelve of the Nuremberg Trials after the war. He was acquitted on all four counts. He died in Munich in 1953. At the start of the war the "Luftwaffe" had four "Luftflotten" (air fleets), each responsible for roughly a quarter of Germany. As the war progressed more air fleets were created as the areas under German rule expanded. As one example, "Luftflotte" 5 was created in 1940 to direct operations in Norway and Denmark, and other "Luftflotten" were created as necessary. Each "Luftflotte" would contain several "Fliegerkorps" (Air Corps), "Fliegerdivision" (Air Division), "Jagdkorps" (Fighter Corps), "Jagddivision" (Fighter Division) or "Jagdfliegerführer" (Fighter Air Command). Each formation would have attached to it a number of units, usually several "Geschwader", but also independent "Staffeln" and "Kampfgruppen". "Luftflotten" were also responsible for the training aircraft and schools in their operational areas. A "Geschwader" was commanded by a "Geschwaderkommodore", with the rank of either major, "Oberstleutnant" (lieutenant colonel) or "Oberst" (colonel). Other "staff" officers within the unit with administrative duties included the adjutant, technical officer, and operations officer, who were usually (though not always) experienced aircrew or pilots still flying on operations. Other specialist staff were navigation, signals, and intelligence personnel. A "Stabschwarm" (headquarters flight) was attached to each "Geschwader". A "Jagdgeschwader" (JG, literally "hunting wing") was a single-seat day fighter "Geschwader", typically equipped with Bf 109 or Fw 190 aircraft flying in the fighter or fighter-bomber roles. Late in the war, by 1944–45, JG 7 and JG 400 (and the jet specialist JV 44) flew much more advanced aircraft, with JG 1 working up with jets at war's end. A "Geschwader" consisted of groups ("Gruppen"), which in turn consisted of "Jagdstaffeln" (fighter squadrons). Hence, Fighter Wing 1 was JG 1, its first "Gruppe" (group) was I./JG 1, using a Roman numeral for the "Gruppe" number only, and its first "Staffel" (squadron) was 1./JG 1. "Geschwader" strength was usually 120–125 aircraft. Each "Gruppe" was commanded by a "Kommandeur", and a "Staffel" by a "Staffelkapitän". However, these were "appointments", not ranks, within the "Luftwaffe". 
Usually, the "Kommodore" would hold the rank of "Oberstleutnant" (lieutenant colonel) or, exceptionally, an "Oberst" (colonel). Even a "Leutnant" (second lieutenant) could find himself commanding a "Staffel". Similarly, a bomber wing was a "Kampfgeschwader" (KG), a night fighter wing was a "Nachtjagdgeschwader" (NJG), a dive bomber wing was a "Stukageschwader" (StG), and units equivalent to those in RAF Coastal Command, with specific responsibilities for coastal patrols and search and rescue duties, were "Küstenfliegergruppen" (Kü.Fl. Gr.). Specialist bomber groups were known as "Kampfgruppen" (KGr). The strength of a bomber "Geschwader" was about 80–90 aircraft. The peacetime strength of the "Luftwaffe" in the spring of 1939 was 370,000 men. After the mobilization in 1939 almost 900,000 men served, and just before Operation Barbarossa in 1941 the personnel strength had reached 1.5 million men. The "Luftwaffe" reached its largest personnel strength during the period November 1943 to June 1944, with almost three million men and women in uniform; 1.7 million of these were male soldiers, 1 million male "Wehrmachtsbeamte" and civilian employees, and almost 300,000 female and male auxiliaries ("Luftwaffenhelfer"). In October 1944, the anti-aircraft units had 600,000 soldiers and 530,000 auxiliaries, including 60,000 male members of the "Reichsarbeitsdienst", 50,000 "Luftwaffenhelfer" (males age 15–17), 80,000 "Flakwehrmänner" (males above military age) and "Flak-V-soldaten" (males unfit for military service), and 160,000 female "Flakwaffenhelferinnen" and "RAD-Maiden", as well as 160,000 foreign personnel (Hiwis). The "Luftwaffe"s Condor Legion experimented with new doctrine and aircraft during the Spanish Civil War. It helped the "Falange" under Francisco Franco to defeat the Republican forces. Over 20,000 German airmen gained combat experience that would give the "Luftwaffe" an important advantage going into the Second World War. One infamous operation was the bombing of Guernica in the Basque country. It is commonly assumed this attack was the result of a "terror doctrine" in "Luftwaffe" doctrine. The raids on Guernica and Madrid caused many civilian casualties and a wave of protests in the democracies. It has been suggested that the bombing of Guernica was carried out for military tactical reasons, in support of ground operations, but the town was not directly involved in any fighting at that point in time. It was not until 1942 that the Germans started to develop bombing policy in which civilians were the primary target, although The Blitz on London and many other British cities involved indiscriminate bombing of civilian areas, 'nuisance raids' which could even involve the machine-gunning of civilians and livestock. When World War II began, the "Luftwaffe" was one of the most technologically advanced air forces in the world. During the Polish Campaign that triggered the war, it quickly established air superiority, and then air supremacy. It supported the German Army operations which ended the campaign in five weeks. The "Luftwaffe"s performance was as OKL had hoped. The "Luftwaffe" rendered invaluable support to the army, mopping up pockets of resistance. Göring was delighted with the performance. Command and control problems were experienced, but owing to the flexibility and improvisation of both the army and "Luftwaffe", these problems were solved. The "Luftwaffe" was to have in place a ground-to-air communication system, which played a vital role in the success of "Fall Gelb". 
In the spring of 1940, the "Luftwaffe" assisted the "Kriegsmarine" and "Heer" in the invasion of Norway. Flying in reinforcements and winning air superiority, the "Luftwaffe" contributed decisively to the German conquest. In the same spring, the "Luftwaffe" contributed to the unexpected success in the Battle of France. It destroyed three Allied air forces and helped secure the defeat of France in just over six weeks. However, it could not destroy the British Expeditionary Force at Dunkirk despite intense bombing. The BEF escaped to continue the war. During the Battle of Britain in the summer of 1940, the "Luftwaffe" inflicted severe damage on the Royal Air Force, but did not achieve the air superiority that Hitler demanded for the proposed invasion of Britain, which was postponed and then cancelled in December 1940. The "Luftwaffe" ravaged British cities during The Blitz, but failed to break British morale. Hitler had already ordered preparations to be made for Operation Barbarossa, the invasion of the Soviet Union. In spring 1941, the "Luftwaffe" helped its Axis partner, Italy, secure victory in the Balkans Campaign and continued to support Italy in the Mediterranean, Middle East and African theatres until May 1945. In June 1941, Germany invaded the Soviet Union. The "Luftwaffe" destroyed thousands of Soviet aircraft, yet it failed to destroy the Red Air Force altogether. Lacking strategic bombers (the very "Ural bombers" that General Wever had asked for six years before), the "Luftwaffe" could not strike at Soviet production centers regularly or with the needed force. As the war dragged on, the "Luftwaffe" was eroded in strength. The defeats at the Battle of Stalingrad and the Battle of Kursk ensured the gradual decline of the "Wehrmacht" on the Eastern Front. British historian Frederick Taylor asserts that "all sides bombed each other's cities during the war. Half a million Soviet citizens, for example, died from German bombing during the invasion and occupation of Russia. That's roughly equivalent to the number of German citizens who died from Allied raids." Meanwhile, the "Luftwaffe" continued to defend German-occupied Europe against the growing offensive power of RAF Bomber Command and, starting in the summer of 1942, the steadily building strength of the United States Army Air Forces. The mounting demands of the Defence of the Reich campaign gradually destroyed the "Luftwaffe"s fighter arm. Despite its belated use of advanced turbojet and rocket-propelled aircraft for bomber-destroyer duties, it was overwhelmed by Allied numbers and hampered by a lack of trained pilots and fuel. A last-ditch attempt, known as Operation Bodenplatte, to win air superiority on 1 January 1945 failed. After the "Bodenplatte" effort, the "Luftwaffe" ceased to be an effective fighting force. German day and night fighter pilots claimed more than 70,000 aerial victories during World War II. Of these, about 745 victories are estimated to have been achieved by jet fighters. Flak shot down 25,000–30,000 Allied planes. Broken down by Allied nation, about 25,000 were American planes, about 20,000 British, 46,100 Soviet, 1,274 French, 375 Polish, and 81 Dutch, as well as aircraft from other Allied nationalities. The highest scoring day fighter pilot was Erich Hartmann with 352 confirmed kills, all of them on the Eastern Front against the Soviets. 
The leading aces in the west were Hans-Joachim Marseille, with 158 kills against planes from the British Empire (RAF, RAAF, and SAAF), and Georg-Peter Eder, with 56 kills of aircraft from the USAAF (of a total of 78). The most successful night fighter pilot was Heinz-Wolfgang Schnaufer, who is credited with 121 kills. A total of 103 German fighter pilots shot down more than 100 enemy aircraft each, for roughly 15,400 aerial victories between them. Roughly a further 360 pilots claimed between 40 and 100 aerial victories each, for around 21,000 victories. Another 500 fighter pilots claimed between 20 and 40 victories each, for a total of 15,000 victories. It is relatively certain that 2,500 German fighter pilots attained ace status, having achieved at least five aerial victories. These achievements were honored: 453 German single- and twin-engine (Messerschmitt Bf 110) day fighter pilots received the Knight's Cross of the Iron Cross, and 85 night fighter pilots, including 14 crew members, were also awarded it. Some bomber pilots were also highly successful. The "Stuka" and "Schlachtflieger" pilot Hans-Ulrich Rudel flew 2,530 ground-attack missions and claimed the destruction of more than 519 tanks and a battleship, among other targets. He was the most highly decorated German serviceman of the Second World War. The bomber pilot Hansgeorg Bätcher flew more than 658 combat missions, destroying numerous ships and other targets. Losses, on the other hand, were high as well. The estimated total of aircraft destroyed and damaged during the war was 76,875. Of these, about 43,000 were lost in combat, the rest in operational accidents and during training. By type, losses totaled 21,452 fighters, 12,037 bombers, 15,428 trainers, 10,221 twin-engine fighters, 5,548 ground attack, 6,733 reconnaissance, and 6,141 transports. According to the General Staff of the "Wehrmacht", losses of flight personnel up to February 1945 amounted to 15,082 officers and 98,568 enlisted men. According to official statistics, total "Luftwaffe" casualties, including ground personnel, amounted to 138,596 killed and 156,132 missing through 31 January 1945. The failure of the "Luftwaffe" in the Defence of the Reich campaign was a result of a number of factors. The "Luftwaffe" lacked an effective air defence system early in the war. Adolf Hitler's foreign policy had pushed Germany into war before these defences could be fully developed. The "Luftwaffe" was forced to improvise and construct its defences during the war. Daylight actions over German-controlled territory were sparse in 1939–1940. The responsibility for the defence of German air space fell to the "Luftgaukommandos" (air district commands). The defence systems relied mostly on the "flak" arm. The defences were not coordinated and communication was poor. This lack of understanding between the flak and flying branches of the defence would plague the "Luftwaffe" throughout the war. Hitler in particular wanted the defence to rest on anti-aircraft artillery, as it gave the civilian population a "psychological crutch", no matter how ineffective the weapons. Most of the battles fought by the "Luftwaffe" on the Western Front were against the RAF's "Circus" raids and the occasional daylight raid into German air space. This was a fortunate position, since the "Luftwaffe"s strategy of focusing its striking power on one front started to unravel with the failure of the invasion of the Soviet Union. 
The "peripheral" strategy of the "Luftwaffe" between 1939 and 1940 had been to deploy its fighter defences at the edges of Axis occupied territory, with little protecting the inner depths. Moreover, the front line units in the West were complaining about the poor numbers and performance of aircraft. Units complained of lack of "Zerstörer" aircraft with all-weather capabilities and the "lack of climbing power of the Bf 109". The "Luftwaffe"s technical edge was slipping as the only formidable new aircraft in the German arsenal was the Focke-Wulf Fw 190. "Generalfeldmarschall" Erhard Milch was to assist Ernst Udet with aircraft production increases and introduction of more modern types of fighter aircraft. However, they explained at a meeting of the Reich Industrial Council on 18 September 1941 that the new next generation aircraft had failed to materialize, and production of obsolete types had to continue to meet the growing need for replacements. The buildup of the "Jagdwaffe" ("Fighter Force") was too rapid and its quality suffered. It was not put under a unified command until 1943, which also affected performance of the nine "Jagdgeschwader" fighter wings in existence in 1939. No further units were formed until 1942, and the years of 1940–1941 were wasted. OKL failed to construct a strategy; instead its command style was reactionary, and its measures not as effective without thorough planning. This was particularly apparent with the "Sturmböck" squadrons, formed to replace the increasingly ineffective twin-engined "Zerstörer" twin-engined heavy fighter wings as the primary defense against USAAF daylight raids. The "Sturmböcke" flew Fw 190A fighters armed with heavy 20 mm and 30 mm cannon to destroy heavy bombers, but this increased the weight and affected the performance of the Fw 190 at a time when the aircraft were meeting large numbers of equal if not superior Allied types. Daytime aerial defense against the USAAF's strongly defended heavy bomber forces, particularly the Eighth Air Force and the Fifteenth Air Force, had its successes through the calendar year of 1943. But at the start of 1944, Eighth AF commander Jimmy Doolittle made a major change in offensive fighter tactics, which defeated the "Luftwaffe"s day fighter force from that time onwards. Steadily increasing numbers of the superlative North American P-51 Mustang single-engine fighter, leading the USAAF's bombers into German airspace defeated first the Bf 110 "Zerstörer" wings, then the Fw 190A Sturmböcke. In terms of technological development, the failure to develop a long-range bomber and capable long-range fighters during this period left the "Luftwaffe" unable to conduct a meaningful strategic bombing campaign throughout the war. However, Germany at that time suffered from limitations in raw materials such as oil and aluminium, which meant that there were insufficient resources for much beyond a tactical air force: given these circumstances, the "Luftwaffe"s reliance on tactical mid-range, twin engined medium bombers and short range dive-bombers was a pragmatic choice of strategy. It might also be argued that the "Luftwaffe"s "Kampfgeschwader" medium and heavy bomber wings were perfectly capable of attacking strategic targets, but the lack of capable long range escort fighters left the bombers unable to carry out their missions effectively against determined and well organised fighter opposition. 
The greatest failure for the "Kampfgeschwader", however, was being saddled with an aircraft intended as a capable four-engined heavy bomber: the perpetually troubled Heinkel He 177, whose engines were prone to catch fire in flight. Of the three parallel proposals from the Heinkel engineering departments for a four-engined version of the A-series He 177 by February 1943 – one of them being the Heinkel firm's "Amerikabomber" candidate – only one, the He 177B, emerged in the concluding months of 1943. Only three airworthy prototypes of the B-series He 177 design were produced by early 1944, some three years after the first prototype flights of the Avro Lancaster, the most successful RAF heavy bomber. Another failure of procurement and equipment was the lack of a dedicated naval air arm. General Felmy had already expressed a desire to build a naval air arm to support "Kriegsmarine" operations in the Atlantic and British waters. Britain was dependent on food and raw materials from its Empire and North America. Felmy pressed this case firmly throughout 1938 and 1939, and, on 31 October 1939, "Großadmiral" Erich Raeder sent a strongly worded letter to Göring in support of such proposals. The early-war twin-engined Heinkel He 115 floatplane and Dornier Do 18 flying boat were too slow and short-ranged. The then-contemporary Blohm & Voss BV 138 "Seedrache" (seadragon) trimotor flying boat became the "Luftwaffe"s primary seaborne maritime patrol platform, with nearly 300 examples built; its trio of Junkers Jumo 205 diesel engines gave it a 4,300 km (2,670 mi) maximum range. Another Blohm & Voss design of 1940, the enormous 46-meter-wingspan, six-engined BV 222 "Wiking" maritime patrol flying boat, proved capable of a 6,800 km (4,200 mi) range at maximum endurance when, in later years, it used higher-output versions of the same Jumo 205 powerplants as the BV 138. The Dornier Do 217 would have been ideal as a land-based choice, but suffered production problems. Raeder also complained about the poor standard of aerial torpedoes, although their design was the responsibility of the "Wehrmacht" combined military's naval arm (the "Kriegsmarine"); by August 1942, production of the Japanese Type 91 torpedo used at Pearl Harbor, as the "Lufttorpedo" LT 850, was even being considered (see Yanagi missions and Heinkel He 111 torpedo bomber operations). Without specialised naval or land-based, purpose-designed maritime patrol aircraft, the "Luftwaffe" was forced to improvise. The Focke-Wulf Fw 200 Condor's airframe – engineered for civilian airliner use – lacked the structural strength for combat maneuvering at lower altitudes, making it unsuitable for use as a bomber in maritime patrol duties. The Condor lacked speed, armour and bomb load capacity. Sometimes the fuselage literally "broke its back", or a wing panel dropped loose from the wing root after a hard landing. Nevertheless, this civilian transport was adapted for the long-range reconnaissance and anti-shipping roles and, between August 1940 and February 1941, Fw 200s sank 85 vessels for a claimed total of 363,000 GRT. Had the "Luftwaffe" focused on naval aviation – particularly maritime patrol aircraft with long range, like the aforementioned diesel-powered multi-engine Blohm & Voss flying boats – Germany might well have been in a position to win the Battle of the Atlantic. However, Raeder and the "Kriegsmarine" failed to press for naval air power until the war began, mitigating the "Luftwaffe"s responsibility. 
In addition, Göring regarded any other branch of the German military developing its own aviation as an encroachment on his authority and continually frustrated the Navy's attempts to build its own airpower. The absence of a strategic bomber force for the "Luftwaffe", following General Wever's accidental death in the early summer of 1936 and the end of the Ural bomber program he had fostered before the invasion of Poland, was not addressed again until the authorization of the "Bomber B" design competition in July 1939. This sought to replace both the medium bomber force with which the "Luftwaffe" was to begin the war and the partly achieved "Schnellbomber" high-speed medium bomber concept with more advanced, twin-engined high-speed bomber aircraft fitted with pairs of relatively "high-power" engines of 1,500 kW (2,000 hp) output and upwards, aircraft that would also be able to function as shorter-range heavy bombers. The spring 1942 "Amerika Bomber" program also sought to produce useful strategic bomber designs for the "Luftwaffe", its prime design priority being an advanced trans-oceanic range capability that would allow the United States to be attacked directly from Europe or the Azores. Inevitably, both the Bomber B and Amerika Bomber programs were victims of the "Wehrmacht" combined military's continued insistence that its "Luftwaffe" air arm support the "Heer" as its primary mission, and of the damage done to the German aviation industry by Allied bomber attacks. The RLM apparently lacked a dedicated "technical-tactical" department that would have been in direct contact with combat pilots to assess their needs for weaponry upgrades and tactical advice; such a body had never been seriously envisioned as a critical ongoing necessity in the planning of the original German air arm. The RLM did have its own "Technisches Amt" (T-Amt) department to handle aviation technology issues, but this was tasked with handling all aviation technology issues in the Third Reich, both military and civilian in nature, and is not known to have ever had any clear and active administrative or consultative links with the front-line forces for such purposes. On the front-line combat side of the issue, and for direct contact with the German aviation firms making the "Luftwaffe"s warplanes, the "Luftwaffe" did have its own reasonably effective system of four military aviation test facilities, or "Erprobungsstellen". These were located at three coastal sites – Peenemünde-West (also incorporating a separate facility in nearby Karlshagen), Tarnewitz and Travemünde – and at the central inland site of Rechlin, itself first established as a military airfield in late August 1918 by the German Empire; the four-facility system was commanded later in World War II by "Oberst" (Colonel) Edgar Petersen. However, due to a lack of co-ordination between the RLM and OKL, all fighter and bomber development was oriented toward short-range aircraft, as they could be produced in greater numbers, rather than quality long-range aircraft, something that put the "Luftwaffe" at a disadvantage as early as the Battle of Britain. The "ramp-up" to the production levels required to fulfill the "Luftwaffe"s front-line needs was also slow, not reaching maximum output until 1944. Production of fighters was not given priority until 1944; Adolf Galland commented that this should have occurred at least a year earlier. 
Galland also pointed to the mistakes made and challenges faced in the development of the Messerschmitt Me 262 jet – which included the protracted development time required for its Junkers Jumo 004 jet engines to achieve reliability. German combat aircraft types that were first designed and flown in the mid-1930s had become obsolete, yet were kept in production, in particular the Ju 87 Stuka and the Bf 109, because there were no well-developed replacement designs. The failure of German production was evident from the start of the Battle of Britain. By the end of 1940 the "Luftwaffe" had suffered heavy losses and needed to regroup. Deliveries of new aircraft were insufficient to meet the drain on resources; the "Luftwaffe", unlike the RAF, was failing to expand its pilot and aircraft numbers. This was partly owing to production planning failures before the war and the demands of the army. Nevertheless, the German aircraft industry was being outproduced in 1940. In terms of fighter aircraft production, the British exceeded their production plans by 43%, while the Germans remained 40% behind target by the summer of 1940. In fact, German fighter production fell from 227 to 177 per month between July and September 1940. One of the many reasons for the failure of the "Luftwaffe" in 1940 was that it did not have the operational and material means to destroy the British aircraft industry, something that the much-anticipated "Bomber B" design competition was intended to address. The so-called "Göring program" had largely been predicated on the defeat of the Soviet Union in 1941. After the "Wehrmacht"'s failure in front of Moscow, plans to increase aircraft production were largely abandoned in favor of supporting the army, with its increased attrition rates and heavy equipment losses. Erhard Milch's reforms expanded production rates. In 1941 an average of 981 aircraft (including 311 fighters) were produced each month. In 1942 this rose to 1,296 aircraft, of which 434 were fighters. Milch's planned production increases were initially opposed, but in June he was granted materials for an average output of 900 fighters per month. By the summer of 1942, the "Luftwaffe"s operational ready rates had recovered from a low of 39% (44% for fighters and 31% for bombers) in the winter of 1941–1942 to 69% (75% for fighters and 66% for bombers) by late June 1942. However, after increased commitments in the east, overall operational ready rates fluctuated between 59% and 65% for the rest of the year. Throughout 1942 the "Luftwaffe" was outproduced in fighter aircraft by 250% and in twin-engine aircraft by 196%. The appointment of Albert Speer as Minister of Armaments increased production of existing designs, and of the few new designs that had originated from earlier in the war. However, the intensification of Allied bombing caused the dispersion of production and prevented an efficient acceleration of expansion. German aviation production reached about 36,000 combat aircraft for 1944. However, by the time this was achieved, the "Luftwaffe" lacked the fuel and trained pilots to make the achievement worthwhile. The failure to maximize production immediately after the failures in the Soviet Union and North Africa ensured the "Luftwaffe"s effective defeat in the period September 1943 – February 1944. Despite the tactical victories it won, it failed to achieve a decisive victory. 
By the time production reached acceptable levels, as with so many other factors for the "Luftwaffe" – and for the entire "Wehrmacht"'s weapons and ordnance technology as a whole – late in the war, it was "too little, too late". By the late 1930s, airframe construction methods had progressed to the point where airframes could be built to any required size, founded on the all-metal airframe design technologies pioneered by Hugo Junkers in 1915 and constantly improved upon for over two decades to follow – especially in Germany, with aircraft like the Dornier Do X flying boat and the Junkers G 38 airliner. However, powering such designs was a major challenge. Mid-1930s aero engines were limited to about 600 hp, and the first 1,000 hp engines were just entering the prototype stage – for the then-new Third Reich's "Luftwaffe" air arm, this meant liquid-cooled inverted V12 designs like the Daimler-Benz DB 601. The United States had already made a start towards this goal by 1937 with two large-displacement, twin-row 18-cylinder air-cooled radial engine designs of at least 46 litres (2,800 cu in) displacement each: the Pratt & Whitney "Double Wasp" and the Wright "Duplex-Cyclone". Nazi Germany's initial need for substantially more powerful aviation engines originated with the private-venture Heinkel He 119 high-speed reconnaissance design and the ostensibly twin-"engined" Messerschmitt Me 261 for maritime reconnaissance duties – to power each of these designs, Daimler-Benz literally "doubled up" its new, fuel-injected DB 601 engine. This "doubling-up" involved placing two DB 601s side by side on either side of a common vertical-plane space frame, with the outer side of each crankcase carrying a mount similar to that used in a single-engine installation; creating a "mirror-image" centrifugal supercharger for the starboard-side component DB 601; inclining the top ends of the crankcases inwards by roughly 30° to mate with the space-frame central mount; and placing a common propeller gear reduction housing across the front ends of the two engines. Such a twin-crankcase "power system" aviation engine, crafted from a pair of DB 601s, resulted in the 2,700 PS (1,986 kW) maximum-output DB 606 "coupled" engine for these two aircraft in February 1937, with each DB 606 weighing in at around 1.5 tonnes. The early development of the DB 606 "coupled" engine was paralleled during the late 1930s by Daimler-Benz's simultaneous development of a 1,500 kW-class engine design using a single crankcase. The result was the twenty-four cylinder Daimler-Benz DB 604 X-configuration engine, with four banks of six cylinders each. 
The DB 604 possessed essentially the same displacement, 46.5 litres (2,830 cu in), as the initial version of the liquid-cooled Junkers Jumo 222 multibank engine, itself a "converse" choice in configuration to the DB 604 in having six banks of four inline cylinders apiece instead. Coincidentally, both the original Jumo 222 design and the DB 604 each weighed about a third less (at some 1,080 kg/2,379 lb dry weight) than the DB 606. The DB 604's protracted development, however, was diverting valuable German aviation powerplant research resources, and with further development of the "twinned-DB 605" based DB 610 coupled engine (itself initiated in June 1940, with a top output of 2,950 PS (2,909 hp), and brought together in the same way – and with the same all-up weight of 1.5 tonnes – as the DB 606 had been) giving improved results at the time, the Reich Air Ministry stopped all work on the DB 604 in September 1942. Such "coupled powerplants" were the exclusive choice of power for the Heinkel He 177A "Greif" heavy bomber, mistasked from its beginnings by being required to perform moderate-angle "dive bombing" despite being a 30-meter-wingspan-class heavy bomber design – the twin nacelles for a pair of DB 606s or 610s did reduce drag for such a combat "requirement", but the poor design of the He 177A's accommodations for these twin-crankcase "power systems" caused repeated outbreaks of engine fires, leading to the "dive bombing" requirement for the He 177A being cancelled by mid-September 1942. BMW worked on what was essentially an enlarged version of its highly successful BMW 801 design from the Focke-Wulf Fw 190A. This led to the 53.7-litre displacement BMW 802 of 1943, an eighteen-cylinder air-cooled radial which nearly matched the American "Duplex-Cyclone's" 54.9-litre figure but whose weight roughly matched that of the 24-cylinder liquid-cooled DB 606, and to the even larger, 83.5-litre displacement BMW 803 28-cylinder liquid-cooled radial; according to post-war statements from BMW development personnel, each was considered a "secondary priority" development program at best. This situation with the 802 and 803 designs led to the company's engineering personnel being redirected to place all efforts on improving the 801 and developing it to its full potential. The BMW 801F radial development, through its use of features coming from the 801E subtype, was able to substantially exceed the 1,500 kW output level. The two closest Allied equivalents to the 801 in configuration and displacement – the American Wright "Twin Cyclone" and the Soviet Shvetsov ASh-82 radials – never needed to be developed beyond a 1,500 kW output level, as larger-displacement, 18-cylinder radial aviation engines in both nations (the aforementioned American "Double Wasp" and "Duplex-Cyclone") and the eventual 1945 premiere of the Soviet Shvetsov ASh-73 design, all three of which had started development before 1940, handled the needs for even greater power from large radial aviation engines. The twinned-up Daimler-Benz DB 601-based, 1,750 kW output DB 606 and its more powerful descendant, the 2,130 kW output DB 605-based DB 610, each weighing some 1.5 tonnes, were the only aircraft powerplants of over 1,500 kW output ever produced by Germany for "Luftwaffe" combat aircraft, used mostly in the aforementioned Heinkel He 177A heavy bomber. Even the largest-displacement inverted V12 aircraft powerplant built in Germany, the 44.52-litre (2,717 cu in) 
Daimler-Benz DB 603, which saw widespread use in twin-engined designs, could not exceed the 1,500 kW output level without further development. By March 1940, even the DB 603 was being "twinned up" as the DB 601/606 and DB 605/610 had been, to become their replacement "power system": this was the strictly experimental, twin-crankcase DB 613, weighing approximately 1.8 tonnes apiece and capable of over 2,570 kW (3,495 PS) output, but it never left its test phase. The proposed over-1,500 kW subtypes of the German aviation industry's existing piston aviation engine designs that retained a single crankcase and "were" able to substantially exceed that output level were the DB 603 LM (1,800 kW at take-off, in production), the DB 603 N (2,205 kW at take-off, planned for 1946) and the BMW 801F (1,765 kW/2,400 PS). The pioneering nature of jet engine technology in the 1940s resulted in numerous development problems for both of Germany's major jet engine designs to see mass production, the Jumo 004 and BMW 003 (both of pioneering axial-flow design), while the more powerful Heinkel HeS 011 never left the test phase, as only 19 examples would ever be built for development. Even with such dismal degrees of success for these advanced aviation powerplant designs, more and more design proposals for new German combat aircraft in the 1943–45 period centered either on the failed Jumo 222 or on the HeS 011 for their propulsion. The bomber arm was given preference and received the "better" pilots; as a result, fighter pilot leaders were later few in number. As with the late shift to fighter production, the "Luftwaffe" pilot schools did not give the fighter pilot schools preference soon enough. The "Luftwaffe", OKW argued, was still an offensive weapon, and its primary focus was on producing bomber pilots. This attitude prevailed until the second half of 1943. During the Defence of the Reich campaign in 1943 and 1944, there were not enough commissioned fighter pilots and leaders to meet attrition rates; as the need to replace aircrew rose with increasing attrition, the quality of pilot training deteriorated rapidly. This was later made worse by fuel shortages for pilot training. Overall this meant reduced training on operational types, formation flying, gunnery training, and combat training, and a total lack of instrument training. At the beginning of the war, commanders were replaced with younger commanders too quickly; these younger commanders had to learn "in the field" rather than entering a post fully qualified. Training of formation leaders was not systematic until 1943, which was far too late, with the "Luftwaffe" already stretched. The "Luftwaffe" thus lacked a cadre of staff officers to set up units, man them, and pass on experience. Moreover, "Luftwaffe" leadership from the start poached the training command, which undermined its ability to replace losses, while also planning for "short sharp campaigns" that did not materialize. Nor were any plans laid for night fighters. When protests were raised, Hans Jeschonnek, Chief of the General Staff of the "Luftwaffe", said, "First we've got to beat Russia, then we can start training!" One of the unique characteristics of the "Luftwaffe" (as opposed to other independent air forces) was the possession of an organic paratrooper force called "Fallschirmjäger". 
Established in 1938, they saw action in their proper role during 1940–1941, most notably in the capture of the Belgian Army fortress at the Battle of Fort Eben-Emael and the Battle for The Hague in May 1940, and during the Battle of Crete in May 1941. However, more than 4,000 "Fallschirmjäger" were killed during the Crete operation. Afterwards, although they continued to be trained in parachute delivery, paratroopers were only used in a parachute role for smaller-scale operations, such as the rescue of Benito Mussolini in 1943. "Fallschirmjäger" formations were mainly used as crack foot infantry in all theatres of the war. Their losses were 22,041 KIA, 57,594 WIA and 44,785 MIA (up to February 1945). During 1942, surplus "Luftwaffe" personnel (see above) were used to form the "Luftwaffe" Field Divisions, standard infantry divisions that were used chiefly as rear-echelon units to free up front-line troops. From 1943, the "Luftwaffe" also had an armoured paratroop division, the "Fallschirm-Panzer" Division 1 Hermann Göring, which was expanded to a "Panzerkorps" in 1944. Ground support and combat units from the "Reichsarbeitsdienst" (RAD) and the National Socialist Motor Corps (NSKK) were also put at the "Luftwaffe"s disposal during the war. In 1942, 56 RAD companies served with the "Luftwaffe" in the West as airfield construction troops. In 1943, 420 RAD companies were trained as anti-aircraft artillery (AAA) and posted to existing "Luftwaffe" AAA battalions in the homeland. At the end of the war, these units were also fighting Allied tanks. Beginning in 1939 with a transport regiment, the NSKK by 1942 had a complete division-sized transportation unit serving the "Luftwaffe", the "NSKK Transportgruppe Luftwaffe", serving in France and on the Eastern Front. The overwhelming majority of its 12,000 members were Belgian, Dutch and French citizens. In 1943 and 1944, aircraft production was moved to concentration camps in order to alleviate labor shortages and to protect production from Allied air raids. The two largest aircraft factories in Germany were located at the Mauthausen-Gusen and Mittelbau-Dora concentration camps. Aircraft parts were also manufactured at Flossenbürg, Buchenwald, Dachau, Ravensbrück, Gross-Rosen, Natzweiler, Herzogenbusch, and Neuengamme. In 1944 and 1945, as many as 90,000 concentration camp prisoners worked in the aviation industry, making up about one tenth of the concentration camp population over the winter of 1944–45. Partly in response to the "Luftwaffe"s demand for more forced laborers to increase fighter production, the concentration camp population more than doubled between mid-1943 (224,000) and mid-1944 (524,000). Part of this increase was due to the deportation of the Hungarian Jews; the "Jägerstab" program was used to justify the deportations to the Hungarian government. Of the 437,000 Hungarian Jews deported between May and July 1944, about 320,000 were gassed on arrival at Auschwitz and the remainder were forced to work; only 50,000 survived. Almost 1,000 fuselages of the Messerschmitt Me 262 jet fighter were produced at Gusen, a subcamp of Mauthausen and a brutal Nazi labor camp where the average life expectancy was six months. By 1944, one-third of production at the crucial Regensburg plant that produced the Bf 109, the backbone of the "Luftwaffe" fighter arm, originated in Gusen and Flossenbürg alone. 
Synthetic oil was produced from shale oil deposits by prisoners of Mittelbau-Dora as part of Operation Desert, directed by Edmund Geilenberg, in order to make up for the decrease in oil production due to Allied bombing. For oil production, three subcamps were constructed and 15,000 prisoners were forced to work in the plant; more than 3,500 people died. Vaivara concentration camp in Estonia was also established for shale oil extraction; about 20,000 prisoners worked there and more than 1,500 died at Vaivara. "Luftwaffe" airfields were frequently maintained using forced labor. Thousands of inmates from five subcamps of Stutthof worked on the airfields. Airfields and bases near several other concentration camps and ghettos were constructed or maintained by prisoners. On the orders of the "Luftwaffe", prisoners from Buchenwald and Herzogenbusch were forced to defuse bombs that had fallen around Düsseldorf and Leeuwarden, respectively. Thousands of "Luftwaffe" personnel worked as concentration camp guards. Auschwitz included a munitions factory guarded by "Luftwaffe" soldiers; 2,700 "Luftwaffe" personnel worked as guards at Buchenwald. Dozens of camps and subcamps were staffed primarily by "Luftwaffe" soldiers. According to the "Encyclopedia of Camps and Ghettos", it was typical for camps devoted to armaments production to be run by the branch of the "Wehrmacht" that used the products. In 1944, many "Luftwaffe" soldiers were transferred to concentration camps to alleviate personnel shortages. "Luftwaffe" paratroopers committed many war crimes in Crete following the Battle of Crete, including the Alikianos executions, the Massacre of Kondomari, and the Razing of Kandanos. Several "Luftwaffe" divisions, including the 1st Parachute Division, 2nd Parachute Division, 4th Parachute Division, 19th Luftwaffe Field Division, 20th Luftwaffe Field Division and the 1st "Fallschirm-Panzer" Division, committed war crimes in Italy, murdering hundreds of civilians. "Luftwaffe" troops participated in the murder of Jews imprisoned in ghettos in Eastern Europe, for example assisting in the murder of 2,680 Jews at the Nemirov ghetto, participating in a series of massacres at the Opoczno ghetto, and helping to liquidate the Dęblin–Irena Ghetto by deporting thousands of Jews to the Treblinka extermination camp. Between 1942 and 1944, two "Luftwaffe" security battalions were stationed in the Białowieża Forest for "Bandenbekämpfung" (anti-partisan) operations. Encouraged by Göring, they murdered thousands of Jews and other civilians. "Luftwaffe" soldiers frequently executed Polish civilians at random on baseless accusations of being "Bolshevik agents", in order to keep the population in line or as reprisal for partisan activities. The performance of the troops was measured by the body count of people murdered. Ten thousand "Luftwaffe" troops were stationed on the Eastern Front for such "anti-partisan" operations. Throughout the war, concentration camp prisoners were forced to serve as human guinea pigs in the testing of "Luftwaffe" equipment. Some experiments were carried out by "Luftwaffe" personnel and others were performed by the SS on the orders of the OKL. In 1941, experiments intended to discover means of preventing and treating hypothermia were carried out for the "Luftwaffe", which had lost aircrew to immersion hypothermia after ditchings. The experiments were conducted at Dachau and Auschwitz. 
Sigmund Rascher, a "Luftwaffe" doctor based at Dachau, published the results at the 1942 medical conference entitled "Medical Problems Arising from Sea and Winter". Of about 400 prisoners forced to participate in cold-water experiments, 80 to 90 were killed. In early 1942, prisoners at Dachau were used by Rascher in experiments to perfect ejection seats at high altitudes. A low-pressure chamber containing these prisoners was used to simulate conditions at altitudes of up to . It was rumoured that Rascher performed vivisections on the brains of victims who survived the initial experiment. Of the 200 subjects, 80 died from the experimentation, and the others were executed. Eugen Hagen, head doctor of the "Luftwaffe", infected inmates of Natzweiler concentration camp with typhus in order to test the efficacy of proposed vaccines. No positive or specific customary international humanitarian law with respect to aerial warfare existed prior to or during World War II. This is also why no "Luftwaffe" officers were prosecuted at the post-World War II Allied war crime trials for the aerial raids. The bombing of Wieluń was an air raid on the Polish town of Wieluń by the "Luftwaffe" on 1 September 1939. The "Luftwaffe" started bombing Wieluń at 04:40, five minutes before the shelling of Westerplatte, which has traditionally been considered the beginning of World War II in Europe. The air raid on the town was one of the first aerial bombings of the war. About 1,300 civilians were killed, hundreds injured, and 90 percent of the town centre was destroyed. The casualty rate was more than twice as high as Guernica. A 1989 Sender Freies Berlin documentary stated that there were no military or industrial targets in the area, except for a small sugar factory in the outskirts of the town. Furthermore, Trenkner stated that German bombers first destroyed the town's hospital. Two attempts, in 1978 and 1983, to prosecute individuals for the bombing of the Wieluń hospital were dismissed by West German judges when prosecutors stated that the pilots had been unable to make out the nature of the structure due to fog. Operation Retribution was the April 1941 German bombing of Belgrade, the capital of the Kingdom of Yugoslavia. The bombing deliberately targeted the killing of civilians as punishment, and resulted in 17,000 civilian deaths. It occurred in the first days of the World War II German-led Axis invasion of Yugoslavia. The operation commenced on 6 April and concluded on 7 or 8 April, resulting in the paralysis of Yugoslav civilian and military command and control, widespread destruction in the centre of the city and many civilian casualties. Following the Yugoslav capitulation, "Luftwaffe" engineers conducted a bomb damage assessment in Belgrade. The report stated that of bombs were dropped, with 10 to 14 percent being incendiaries. It listed all the targets of the bombing, which included: the royal palace, the war ministry, military headquarters, the central post office, the telegraph office, passenger and goods railway stations, power stations and barracks. It also mentioned that seven aerial mines were dropped, and that areas in the centre and northwest of the city had been destroyed, comprising 20 to 25 percent of its total area. Some aspects of the bombing remain unexplained, particularly the use of the aerial mines. In contrast, Pavlowitch states that almost 50 percent of housing in Belgrade was destroyed. 
After the invasion, the Germans forced between 3,500 and 4,000 Jews to collect rubble caused by the bombing. Several prominent "Luftwaffe" commanders were convicted of war crimes, including General Alexander Löhr and Field Marshal Albert Kesselring.
https://en.wikipedia.org/wiki?curid=17885
Lassa fever Lassa fever, also known as Lassa hemorrhagic fever (LHF), is a type of viral hemorrhagic fever caused by the Lassa virus. Many of those infected by the virus do not develop symptoms. When symptoms occur they typically include fever, weakness, headaches, vomiting, and muscle pains. Less commonly there may be bleeding from the mouth or gastrointestinal tract. The risk of death once infected is about one percent, and death frequently occurs within two weeks of the onset of symptoms. Among those who survive, about a quarter have hearing loss, which improves within three months in about half of these cases. The disease is usually initially spread to people via contact with the urine or feces of an infected multimammate mouse. Spread can then occur via direct contact between people. Diagnosis based on symptoms is difficult. Confirmation is by laboratory testing to detect the virus's RNA, antibodies to the virus, or the virus itself in cell culture. Other conditions that may present similarly include Ebola, malaria, typhoid fever, and yellow fever. The Lassa virus is a member of the "Arenaviridae" family of viruses. There is no vaccine. Prevention requires isolating those who are infected and decreasing contact with the mice. Other efforts to control the spread of disease include keeping a cat to hunt vermin and storing food in sealed containers. Treatment is directed at addressing dehydration and improving symptoms. The antiviral medication ribavirin has been recommended, but evidence to support its use is weak. Descriptions of the disease date from the 1950s. The virus was first described in 1969 from a case in the town of Lassa, in Borno State, Nigeria. Lassa fever is relatively common in West Africa, including the countries of Nigeria, Liberia, Sierra Leone, Guinea, and Ghana. There are about 300,000 to 500,000 cases a year, resulting in about 5,000 deaths. Onset of symptoms is typically 7 to 21 days after exposure. In 80% of those who are infected, few or no symptoms occur. These mild symptoms may include fever, tiredness, weakness, and headache. In the other 20% of people, more severe symptoms such as bleeding gums, breathing problems, vomiting, chest pain, or dangerously low blood pressure may occur. Long-term complications may include hearing loss. In those who are pregnant, miscarriage may occur in 95% of cases. In cases in which death occurs, it typically does so within 14 days of onset. About 1% of all Lassa virus infections result in death. Approximately 15–20% of those who require hospitalization for Lassa fever die. The risk of death is greater in those who are pregnant. A "swollen baby syndrome" may occur in newborns, infants and toddlers, with pitting edema, abdominal distension and bleeding. Lassa virus is a member of the Arenaviridae, a family of negative-sense, single-stranded RNA viruses. Specifically, it is an Old World arenavirus that is enveloped, with a single-stranded, bi-segmented RNA genome. The virus has both a large and a small genome segment, with four lineages identified to date: Josiah (Sierra Leone), GA391 (Nigeria), LP (Nigeria) and strain AV. Lassa virus commonly spreads to humans from other animals, specifically the natal multimammate mouse or natal multimammate rat ("Mastomys natalensis"), also called the African rat. This is probably the most common mouse in equatorial Africa; it is common in human households and is eaten as a delicacy in some areas. 
The multimammate mouse can quickly produce a large number of offspring, tends to colonize human settlements, increasing the risk of rodent–human contact, and is found throughout the west, central and eastern parts of the African continent. Once the mouse has become a carrier, it will excrete the virus throughout the rest of its lifetime in its feces and urine, creating ample opportunity for exposure. The virus is probably transmitted by contact with the feces or urine of animals accessing grain stores in residences. No study has proven the presence of the virus in breast milk, but the high level of viremia suggests it may be possible. Individuals at higher risk of contracting the infection are those who live in rural areas where "Mastomys" are found and where sanitation is poor. Infection typically occurs by direct or indirect exposure to animal excrement through the respiratory or gastrointestinal tracts. Inhalation of tiny particles of infectious material (aerosol) is believed to be the most significant means of exposure. It is possible to acquire the infection through broken skin or mucous membranes that are directly exposed to infectious material. Transmission from person to person has been established, presenting a disease risk for healthcare workers. The virus is present in urine for between three and nine weeks after infection, and it can be transmitted in semen for up to three months after infection. A range of laboratory investigations are performed, where possible, to diagnose the disease and assess its course and complications. The confidence of a diagnosis can be compromised if laboratory tests are not available. One complicating factor is the number of febrile illnesses present in Africa, such as malaria or typhoid fever, that can exhibit similar symptoms, particularly for the non-specific manifestations of Lassa fever. In cases with abdominal pain, in countries where Lassa is common, Lassa fever is often misdiagnosed as appendicitis or intussusception, which delays treatment with the antiviral ribavirin. In West Africa, where Lassa is most common, it is difficult to diagnose due to the absence of proper equipment to perform testing. The FDA has yet to approve a widely validated laboratory test for Lassa, but there are tests that have been able to provide definitive proof of the presence of the Lassa virus (LASV). These tests include cell cultures, PCR, ELISA antigen assays, plaque neutralization assays, and immunofluorescence assays; however, immunofluorescence assays provide less definitive proof of Lassa infection. An ELISA test for antigen and immunoglobulin M (IgM) antibodies gives 88% sensitivity and 90% specificity for the presence of the infection. Other laboratory findings in Lassa fever include lymphocytopenia (a low lymphocyte count), thrombocytopenia (low platelets), and elevated aspartate transaminase levels in the blood. Lassa fever virus can also be found in cerebrospinal fluid. Control of the "Mastomys" rodent population is impractical, so measures focus on keeping rodents out of homes and food supplies, encouraging effective personal hygiene, storing grain and other foodstuffs in rodent-proof containers, and disposing of garbage far from the home to help sustain clean households. Gloves, masks, laboratory coats, and goggles are advised while in contact with an infected person, to avoid contact with blood and body fluids. In many countries these issues are monitored by a department of public health. 
In less developed countries, these types of organizations may not have the means necessary to effectively control outbreaks. There is no vaccine for humans as of 2019. Researchers at the United States Army Medical Research Institute of Infectious Diseases had a promising vaccine candidate in 2002. They developed a replication-competent vaccine against Lassa virus based on recombinant vesicular stomatitis virus vectors expressing the Lassa virus glycoprotein. After a single intramuscular injection, test primates survived lethal challenge while showing no clinical symptoms. Treatment is directed at addressing dehydration and improving symptoms. All persons suspected of Lassa fever infection should be admitted to isolation facilities and their body fluids and excreta properly disposed of. The antiviral medication ribavirin has been recommended, but evidence to support its use is weak, and some evidence has found that it may worsen outcomes in certain cases. Fluid replacement, blood transfusions, and medication for low blood pressure may be required. Intravenous interferon therapy has also been used. When Lassa fever infects pregnant women late in their third trimester, inducing delivery is necessary for the mother to have a good chance of survival. This is because the virus has an affinity for the placenta and other highly vascular tissues. The fetus has only a one in ten chance of survival no matter what course of action is taken; hence, the focus is always on saving the life of the mother. Following delivery, women should receive the same treatment as other people with Lassa fever. About 15–20% of hospitalized people with Lassa fever will die from the illness. The overall case fatality rate is estimated to be 1%, but during epidemics mortality can climb as high as 50%. The mortality rate is greater than 80% when the disease occurs in pregnant women during their third trimester; fetal death also occurs in nearly all such cases. Abortion decreases the risk of death to the mother. Some survivors experience lasting effects of the disease, which can include partial or complete deafness. Because of treatment with ribavirin, fatality rates have declined. There are about 300,000 to 500,000 cases a year, resulting in about 5,000 deaths; one estimate places the number as high as 3 million cases per year. Estimates of Lassa fever incidence are complicated by the lack of easily available diagnosis, limited public health surveillance infrastructure, and the clustering of reported cases near areas of intensive sampling. The infection affects females 1.2 times more often than males. The age group predominantly infected is 21–30 years. Lassa high-risk areas are near the western and eastern extremes of West Africa. As of 2018, the Lassa belt includes Guinea, Nigeria, Sierra Leone and Liberia. As of 2003, 10–16% of people admitted to hospital in Sierra Leone and Liberia had the virus. The case fatality rate for those who are hospitalized for the disease is about 15–20%. Research showed a twofold increased risk of infection for those living in close proximity to someone with infection symptoms within the previous year. The high-risk areas cannot be well defined by any known biogeographical or environmental breaks other than the range of the multimammate rat; the areas most affected are Guinea (the Kindia, Faranah and Nzerekore regions), Liberia (mostly Lofa, Bong, and Nimba counties), Nigeria (in about 10 of 36 states) and Sierra Leone (typically the Kenema and Kailahun districts). 
It is less common in the Central African Republic, Mali, Senegal and other nearby countries, and less common still in Ghana and the Democratic Republic of the Congo. Benin had its first confirmed cases in 2014, and Togo had its first confirmed cases in 2016. As of 2013, the spread of Lassa fever outside West Africa had been very limited. Twenty to thirty cases had been described in Europe, caused by importation of the virus by infected individuals. Cases found outside West Africa were found to carry a higher fatality risk because of delays in diagnosis and treatment resulting from unawareness of the risk associated with the symptoms. Imported cases have not led to larger epidemics outside Africa due to a lack of human-to-human transmission in hospital settings; an exception occurred in 2003, when a healthcare worker became infected before the patient showed clear symptoms. An outbreak of Lassa fever occurred in Nigeria during 2018 and spread to 18 of the country's states; it was the largest outbreak of Lassa recorded. As of 25 February 2018, there were 1,081 suspected cases and 90 reported deaths; 317 of the cases and 72 of the deaths were confirmed as Lassa, and the total increased to 431 reported cases in 2018. The total number of cases in Nigeria in 2019 was 810, with 167 deaths, the largest case fatality rate (23.3%) until then. The epidemic started in the second week of January; by the tenth week, the total number of cases had risen to 855 and deaths to 144, a case fatality rate of 16.8%. Lassa fever is endemic in Liberia. From 1 January 2017 through 23 January 2018, 91 suspected cases were reported from six counties: Bong, Grand Bassa, Grand Kru, Lofa, Margibi, and Nimba. Thirty-three of these cases were laboratory confirmed, including 15 deaths (a case fatality rate for confirmed cases of 45.4%). In February 2020, a total of 24 confirmed cases with nine associated deaths had been reported from nine health districts in six counties; Grand Bassa and Bong counties accounted for 20 of the confirmed cases. The disease was identified in Nigeria in 1969. It is named after the town of Lassa, in which it was discovered. A prominent expert on the disease, Aniru Conteh, died of Lassa fever. The Lassa virus is one of several viruses identified by the WHO as a likely cause of a future epidemic; it is therefore listed for urgent research and development to develop new diagnostic tests, vaccines, and medicines. In 2007, SIGA Technologies studied a medication in guinea pigs with Lassa fever. Work on a vaccine is continuing, with multiple approaches showing positive results in animal trials.
https://en.wikipedia.org/wiki?curid=17887